Error lines from build-log.txt
... skipping 153 lines ...
I0617 04:38:12.009238 5720 common.go:152] Using cluster name:
I0617 04:38:12.009293 5720 http.go:37] curl https://storage.googleapis.com/kubernetes-release/release/stable-1.23.txt
I0617 04:38:12.073204 5720 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0617 04:38:12.074935 5720 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.24.0-beta.2+v1.24.0-beta.1-120-g5889ff0142/linux/amd64/kops
I0617 04:38:12.901140 5720 up.go:44] Cleaning up any leaked resources from previous cluster
I0617 04:38:12.901554 5720 dumplogs.go:45] /logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kops toolbox dump --name e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user
W0617 04:38:13.411995 5720 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0617 04:38:13.412045 5720 down.go:48] /logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kops delete cluster --name e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --yes
I0617 04:38:13.432730 5752 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0617 04:38:13.432838 5752 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io" not found
I0617 04:38:13.901614 5720 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/17 04:38:13 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0617 04:38:13.910753 5720 http.go:37] curl https://ip.jsb.workers.dev
I0617 04:38:13.977631 5720 up.go:156] /logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kops create cluster --name e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.23.8 --ssh-public-key /tmp/kops/e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --image=amazon/amzn2-ami-kernel-5.10-hvm-2.0.20220426.0-x86_64-gp2 --channel=alpha --networking=cilium --container-runtime=docker --admin-access 35.224.48.17/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-1a --master-size c5.large
I0617 04:38:14.001481 5760 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0617 04:38:14.001592 5760 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0617 04:38:14.030872 5760 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io/id_ed25519.pub
I0617 04:38:14.528256 5760 new_cluster.go:1168] Cloud Provider ID = aws
... skipping 519 lines ...
I0617 04:38:47.808397 5720 up.go:240] /logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kops validate cluster --name e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0617 04:38:47.831199 5798 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0617 04:38:47.831307 5798 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io
W0617 04:38:49.135726 5798 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
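A quick external check of this state (a sketch, not taken from this log: it assumes dig is installed, uses the cluster name above, and the dns-controller log check assumes kubectl access once the API does resolve):

  # Resolve the API record; during bring-up this is expected to return the kops
  # placeholder address 203.0.113.123 (or NXDOMAIN before the record exists).
  dig +short api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io

  # Once the API is reachable, the dns-controller deployment logs in kube-system
  # usually show why the record has not been updated yet.
  kubectl -n kube-system logs deployment/dns-controller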
W0617 04:38:59.178515 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:39:09.212013 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:39:19.257962 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:39:29.298426 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:39:39.345582 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:39:49.381001 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:39:59.428846 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:40:09.471989 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:40:19.511146 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:40:29.560907 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:40:39.598279 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:40:49.645771 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:40:59.680770 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:41:09.717742 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:41:19.767913 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:41:29.803976 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0617 04:41:39.835075 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
... skipping 20 lines ...
Pod kube-system/ebs-csi-node-5d654 system-node-critical pod "ebs-csi-node-5d654" is pending
Pod kube-system/ebs-csi-node-gksqk system-node-critical pod "ebs-csi-node-gksqk" is pending
Pod kube-system/ebs-csi-node-qv6jp system-node-critical pod "ebs-csi-node-qv6jp" is pending
Pod kube-system/ebs-csi-node-tt9gz system-node-critical pod "ebs-csi-node-tt9gz" is pending
Pod kube-system/etcd-manager-main-ip-172-20-45-207.eu-west-1.compute.internal system-cluster-critical pod "etcd-manager-main-ip-172-20-45-207.eu-west-1.compute.internal" is pending
Validation Failed
W0617 04:41:52.557883 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
... skipping 23 lines ...
Pod kube-system/ebs-csi-node-5d654 system-node-critical pod "ebs-csi-node-5d654" is pending
Pod kube-system/ebs-csi-node-gksqk system-node-critical pod "ebs-csi-node-gksqk" is pending
Pod kube-system/ebs-csi-node-qv6jp system-node-critical pod "ebs-csi-node-qv6jp" is pending
Pod kube-system/ebs-csi-node-sj72b system-node-critical pod "ebs-csi-node-sj72b" is pending
Pod kube-system/ebs-csi-node-tt9gz system-node-critical pod "ebs-csi-node-tt9gz" is pending
Validation Failed
W0617 04:42:04.573490 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
... skipping 17 lines ...
Pod kube-system/ebs-csi-node-5d654 system-node-critical pod "ebs-csi-node-5d654" is pending
Pod kube-system/ebs-csi-node-gksqk system-node-critical pod "ebs-csi-node-gksqk" is pending
Pod kube-system/ebs-csi-node-qv6jp system-node-critical pod "ebs-csi-node-qv6jp" is pending
Pod kube-system/ebs-csi-node-sj72b system-node-critical pod "ebs-csi-node-sj72b" is pending
Pod kube-system/ebs-csi-node-tt9gz system-node-critical pod "ebs-csi-node-tt9gz" is pending
Validation Failed
W0617 04:42:16.523604 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
... skipping 12 lines ...
Pod kube-system/ebs-csi-controller-6c9c7b4f68-z4dxj system-cluster-critical pod "ebs-csi-controller-6c9c7b4f68-z4dxj" is pending
Pod kube-system/ebs-csi-node-5d654 system-node-critical pod "ebs-csi-node-5d654" is pending
Pod kube-system/ebs-csi-node-qv6jp system-node-critical pod "ebs-csi-node-qv6jp" is pending
Pod kube-system/ebs-csi-node-sj72b system-node-critical pod "ebs-csi-node-sj72b" is pending
Pod kube-system/ebs-csi-node-tt9gz system-node-critical pod "ebs-csi-node-tt9gz" is pending
Validation Failed
W0617 04:42:28.461360 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
... skipping 10 lines ...
Pod kube-system/coredns-5556cb978d-tswwt system-cluster-critical pod "coredns-5556cb978d-tswwt" is pending
Pod kube-system/ebs-csi-node-5d654 system-node-critical pod "ebs-csi-node-5d654" is pending
Pod kube-system/ebs-csi-node-qv6jp system-node-critical pod "ebs-csi-node-qv6jp" is pending
Pod kube-system/ebs-csi-node-sj72b system-node-critical pod "ebs-csi-node-sj72b" is pending
Pod kube-system/ebs-csi-node-tt9gz system-node-critical pod "ebs-csi-node-tt9gz" is pending
Validation Failed
W0617 04:42:40.371366 5798 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-west-1a Master c5.large 1 1 eu-west-1a
nodes-eu-west-1a Node t3.medium 4 4 eu-west-1a
... skipping 257 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 471 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver hostPathSymlink doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 189 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:45:20.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":1,"skipped":5,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:20.283: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 158 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:45:20.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-4955" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease should have OwnerReferences set","total":-1,"completed":1,"skipped":4,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:45:21.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2265" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:21.838: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 11 lines ...
Only supported for providers [vsphere] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:21.861: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 113 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Jun 17 04:45:20.771: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-ca3098a0-bc60-4f75-ae5f-6b0013fcf901" in namespace "security-context-test-9973" to be "Succeeded or Failed"
Jun 17 04:45:20.880: INFO: Pod "busybox-readonly-true-ca3098a0-bc60-4f75-ae5f-6b0013fcf901": Phase="Pending", Reason="", readiness=false. Elapsed: 108.682526ms
Jun 17 04:45:22.995: INFO: Pod "busybox-readonly-true-ca3098a0-bc60-4f75-ae5f-6b0013fcf901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223843111s
Jun 17 04:45:25.100: INFO: Pod "busybox-readonly-true-ca3098a0-bc60-4f75-ae5f-6b0013fcf901": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329316268s
Jun 17 04:45:27.206: INFO: Pod "busybox-readonly-true-ca3098a0-bc60-4f75-ae5f-6b0013fcf901": Phase="Failed", Reason="", readiness=false. Elapsed: 6.435161478s
Jun 17 04:45:27.206: INFO: Pod "busybox-readonly-true-ca3098a0-bc60-4f75-ae5f-6b0013fcf901" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:45:27.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9973" for this suite.
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
when running a container with a new image
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
should be able to pull image [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":0,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:28.037: INFO: Only supported for providers [gce gke] (not aws)
... skipping 68 lines ...
Jun 17 04:45:20.259: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test env composition
Jun 17 04:45:20.794: INFO: Waiting up to 5m0s for pod "var-expansion-9baf5f53-72d0-4e41-a67d-5bb8b33c9ebd" in namespace "var-expansion-7462" to be "Succeeded or Failed"
Jun 17 04:45:20.907: INFO: Pod "var-expansion-9baf5f53-72d0-4e41-a67d-5bb8b33c9ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 113.010985ms
Jun 17 04:45:23.014: INFO: Pod "var-expansion-9baf5f53-72d0-4e41-a67d-5bb8b33c9ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219901658s
Jun 17 04:45:25.121: INFO: Pod "var-expansion-9baf5f53-72d0-4e41-a67d-5bb8b33c9ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327252125s
Jun 17 04:45:27.228: INFO: Pod "var-expansion-9baf5f53-72d0-4e41-a67d-5bb8b33c9ebd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.433673557s
STEP: Saw pod success
Jun 17 04:45:27.228: INFO: Pod "var-expansion-9baf5f53-72d0-4e41-a67d-5bb8b33c9ebd" satisfied condition "Succeeded or Failed"
Jun 17 04:45:27.350: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod var-expansion-9baf5f53-72d0-4e41-a67d-5bb8b33c9ebd container dapi-container: <nil>
STEP: delete the pod
Jun 17 04:45:27.678: INFO: Waiting for pod var-expansion-9baf5f53-72d0-4e41-a67d-5bb8b33c9ebd to disappear
Jun 17 04:45:27.784: INFO: Pod var-expansion-9baf5f53-72d0-4e41-a67d-5bb8b33c9ebd no longer exists
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 27 lines ...
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:28.140: INFO: Only supported for providers [azure] (not aws)
... skipping 187 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:28.811: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 48 lines ...
• [SLOW TEST:9.641 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should ensure a single API token exists
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":1,"skipped":0,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:29.419: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 114 lines ...
Jun 17 04:45:20.207: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 17 04:45:20.743: INFO: Waiting up to 5m0s for pod "pod-ebbb8e64-745e-4374-967a-b21e217b4db2" in namespace "emptydir-1256" to be "Succeeded or Failed"
Jun 17 04:45:20.860: INFO: Pod "pod-ebbb8e64-745e-4374-967a-b21e217b4db2": Phase="Pending", Reason="", readiness=false. Elapsed: 117.491074ms
Jun 17 04:45:22.970: INFO: Pod "pod-ebbb8e64-745e-4374-967a-b21e217b4db2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226967985s
Jun 17 04:45:25.079: INFO: Pod "pod-ebbb8e64-745e-4374-967a-b21e217b4db2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335915796s
Jun 17 04:45:27.187: INFO: Pod "pod-ebbb8e64-745e-4374-967a-b21e217b4db2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444443587s
Jun 17 04:45:29.295: INFO: Pod "pod-ebbb8e64-745e-4374-967a-b21e217b4db2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.552033353s
STEP: Saw pod success
Jun 17 04:45:29.295: INFO: Pod "pod-ebbb8e64-745e-4374-967a-b21e217b4db2" satisfied condition "Succeeded or Failed"
Jun 17 04:45:29.402: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod pod-ebbb8e64-745e-4374-967a-b21e217b4db2 container test-container: <nil>
STEP: delete the pod
Jun 17 04:45:29.638: INFO: Waiting for pod pod-ebbb8e64-745e-4374-967a-b21e217b4db2 to disappear
Jun 17 04:45:29.748: INFO: Pod pod-ebbb8e64-745e-4374-967a-b21e217b4db2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 40 lines ...
• [SLOW TEST:10.611 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update/patch PodDisruptionBudget status [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:30.505: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Driver hostPath doesn't support ext3 -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:45:27.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:45:31.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7614" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:31.888: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 27 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-e598d724-5320-4ed8-90b4-867acc76f754
STEP: Creating a pod to test consume configMaps
Jun 17 04:45:20.884: INFO: Waiting up to 5m0s for pod "pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b" in namespace "configmap-3746" to be "Succeeded or Failed"
Jun 17 04:45:20.995: INFO: Pod "pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b": Phase="Pending", Reason="", readiness=false. Elapsed: 110.228309ms
Jun 17 04:45:23.101: INFO: Pod "pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216379036s
Jun 17 04:45:25.209: INFO: Pod "pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324415392s
Jun 17 04:45:27.319: INFO: Pod "pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434953102s
Jun 17 04:45:29.426: INFO: Pod "pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542075822s
Jun 17 04:45:31.533: INFO: Pod "pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.64866907s
STEP: Saw pod success
Jun 17 04:45:31.533: INFO: Pod "pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b" satisfied condition "Succeeded or Failed"
Jun 17 04:45:31.639: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b container agnhost-container: <nil>
STEP: delete the pod
Jun 17 04:45:31.859: INFO: Waiting for pod pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b to disappear
Jun 17 04:45:31.965: INFO: Pod pod-configmaps-a18ac645-1a66-4161-8d7f-d65d7bbb217b no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.481 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:32.295: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 44 lines ...
• [SLOW TEST:12.926 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
• [SLOW TEST:14.450 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should validate Replicaset Status endpoints [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
S
------------------------------
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:45:34.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4375" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":2,"skipped":25,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:34.332: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 208 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:38.319: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 80 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl replace
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1570
should update a single-container pod's image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:38.418: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 42 lines ...
STEP: Destroying namespace "services-4800" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:39.151: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 91 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":8,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:41.059: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 82 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}
SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
... skipping 199 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Guestbook application
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
should create and stop a working application [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:51.109: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 166 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":15,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 62 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
should support exec through an HTTP proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:439
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":1,"skipped":28,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:52.089: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 151 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":14,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:54.254: INFO: Only supported for providers [azure] (not aws)
... skipping 89 lines ...
• [SLOW TEST:24.607 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:56.994: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 156 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":12,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:45:59.973: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
Jun 17 04:45:37.626: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:37.733: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:38.265: INFO: Unable to read jessie_udp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:38.371: INFO: Unable to read jessie_tcp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:38.483: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:38.592: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:39.023: INFO: Lookups using dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380 failed for: [wheezy_udp@dns-test-service.dns-136.svc.cluster.local wheezy_tcp@dns-test-service.dns-136.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local jessie_udp@dns-test-service.dns-136.svc.cluster.local jessie_tcp@dns-test-service.dns-136.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local]
Jun 17 04:45:44.129: INFO: Unable to read wheezy_udp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:44.249: INFO: Unable to read wheezy_tcp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:44.391: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:44.501: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:45.033: INFO: Unable to read jessie_udp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:45.138: INFO: Unable to read jessie_tcp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:45.244: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:45.350: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:45.787: INFO: Lookups using dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380 failed for: [wheezy_udp@dns-test-service.dns-136.svc.cluster.local wheezy_tcp@dns-test-service.dns-136.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local jessie_udp@dns-test-service.dns-136.svc.cluster.local jessie_tcp@dns-test-service.dns-136.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local]
Jun 17 04:45:49.134: INFO: Unable to read wheezy_udp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:49.240: INFO: Unable to read wheezy_tcp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:49.349: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:49.454: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:49.997: INFO: Unable to read jessie_udp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:50.103: INFO: Unable to read jessie_tcp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:50.209: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:50.316: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:50.754: INFO: Lookups using dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380 failed for: [wheezy_udp@dns-test-service.dns-136.svc.cluster.local wheezy_tcp@dns-test-service.dns-136.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local jessie_udp@dns-test-service.dns-136.svc.cluster.local jessie_tcp@dns-test-service.dns-136.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local]
Jun 17 04:45:54.129: INFO: Unable to read wheezy_udp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:54.237: INFO: Unable to read wheezy_tcp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:54.345: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:54.463: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:55.000: INFO: Unable to read jessie_udp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:55.106: INFO: Unable to read jessie_tcp@dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:55.216: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:55.324: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local from pod dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380: the server could not find the requested resource (get pods dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380)
Jun 17 04:45:55.778: INFO: Lookups using dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380 failed for: [wheezy_udp@dns-test-service.dns-136.svc.cluster.local wheezy_tcp@dns-test-service.dns-136.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local jessie_udp@dns-test-service.dns-136.svc.cluster.local jessie_tcp@dns-test-service.dns-136.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-136.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-136.svc.cluster.local]
Jun 17 04:46:00.763: INFO: DNS probes using dns-136/dns-test-9325d7d4-0cc8-4680-a6b6-b71e989c5380 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:39.418 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:01.348: INFO: Only supported for providers [openstack] (not aws)
... skipping 64 lines ...
• [SLOW TEST:41.053 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to preserve UDP traffic when initial unready endpoints get ready
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:293
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready","total":-1,"completed":2,"skipped":18,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
Jun 17 04:46:03.017: INFO: AfterEach: Cleaning up test resources.
Jun 17 04:46:03.017: INFO: pvc is nil
Jun 17 04:46:03.017: INFO: Deleting PersistentVolume "hostpath-zvhdh"
•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":3,"skipped":24,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:03.148: INFO: Driver local doesn't support ext4 -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-map-fdc79a78-1879-4cce-bd39-ef9a3b41a164
STEP: Creating a pod to test consume secrets
Jun 17 04:45:49.812: INFO: Waiting up to 5m0s for pod "pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46" in namespace "secrets-2650" to be "Succeeded or Failed"
Jun 17 04:45:49.921: INFO: Pod "pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46": Phase="Pending", Reason="", readiness=false. Elapsed: 109.125268ms
Jun 17 04:45:52.027: INFO: Pod "pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215691929s
Jun 17 04:45:54.134: INFO: Pod "pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322326919s
Jun 17 04:45:56.241: INFO: Pod "pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429606536s
Jun 17 04:45:58.348: INFO: Pod "pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536343507s
Jun 17 04:46:00.462: INFO: Pod "pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649931854s
Jun 17 04:46:02.569: INFO: Pod "pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.757270561s
STEP: Saw pod success
Jun 17 04:46:02.569: INFO: Pod "pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46" satisfied condition "Succeeded or Failed"
Jun 17 04:46:02.675: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46 container secret-volume-test: <nil>
STEP: delete the pod
Jun 17 04:46:02.911: INFO: Waiting for pod pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46 to disappear
Jun 17 04:46:03.019: INFO: Pod pod-secrets-2426e20d-a15b-47e6-865d-9893dd27be46 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.399 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:03.244: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 47 lines ...
Jun 17 04:45:39.072: INFO: Unable to read jessie_udp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:39.183: INFO: Unable to read jessie_tcp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:39.293: INFO: Unable to read jessie_udp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:39.404: INFO: Unable to read jessie_tcp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:39.547: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:39.661: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:40.105: INFO: Lookups using dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7633 wheezy_tcp@dns-test-service.dns-7633 wheezy_udp@dns-test-service.dns-7633.svc wheezy_tcp@dns-test-service.dns-7633.svc wheezy_udp@_http._tcp.dns-test-service.dns-7633.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7633.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7633 jessie_tcp@dns-test-service.dns-7633 jessie_udp@dns-test-service.dns-7633.svc jessie_tcp@dns-test-service.dns-7633.svc jessie_udp@_http._tcp.dns-test-service.dns-7633.svc jessie_tcp@_http._tcp.dns-test-service.dns-7633.svc]
Jun 17 04:45:45.216: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:45.324: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:45.432: INFO: Unable to read wheezy_udp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:45.540: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:45.648: INFO: Unable to read wheezy_udp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
... skipping 5 lines ...
Jun 17 04:45:46.744: INFO: Unable to read jessie_udp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:46.855: INFO: Unable to read jessie_tcp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:46.962: INFO: Unable to read jessie_udp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:47.070: INFO: Unable to read jessie_tcp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:47.180: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:47.288: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:47.723: INFO: Lookups using dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7633 wheezy_tcp@dns-test-service.dns-7633 wheezy_udp@dns-test-service.dns-7633.svc wheezy_tcp@dns-test-service.dns-7633.svc wheezy_udp@_http._tcp.dns-test-service.dns-7633.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7633.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7633 jessie_tcp@dns-test-service.dns-7633 jessie_udp@dns-test-service.dns-7633.svc jessie_tcp@dns-test-service.dns-7633.svc jessie_udp@_http._tcp.dns-test-service.dns-7633.svc jessie_tcp@_http._tcp.dns-test-service.dns-7633.svc]
Jun 17 04:45:50.213: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:50.321: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:50.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:50.544: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:50.651: INFO: Unable to read wheezy_udp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
... skipping 5 lines ...
Jun 17 04:45:51.737: INFO: Unable to read jessie_udp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:51.848: INFO: Unable to read jessie_tcp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:51.956: INFO: Unable to read jessie_udp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:52.081: INFO: Unable to read jessie_tcp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:52.189: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:52.298: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:52.745: INFO: Lookups using dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7633 wheezy_tcp@dns-test-service.dns-7633 wheezy_udp@dns-test-service.dns-7633.svc wheezy_tcp@dns-test-service.dns-7633.svc wheezy_udp@_http._tcp.dns-test-service.dns-7633.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7633.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7633 jessie_tcp@dns-test-service.dns-7633 jessie_udp@dns-test-service.dns-7633.svc jessie_tcp@dns-test-service.dns-7633.svc jessie_udp@_http._tcp.dns-test-service.dns-7633.svc jessie_tcp@_http._tcp.dns-test-service.dns-7633.svc]
Jun 17 04:45:55.218: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:55.326: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:55.449: INFO: Unable to read wheezy_udp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:55.582: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:55.691: INFO: Unable to read wheezy_udp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
... skipping 5 lines ...
Jun 17 04:45:56.782: INFO: Unable to read jessie_udp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:56.890: INFO: Unable to read jessie_tcp@dns-test-service.dns-7633 from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:57.001: INFO: Unable to read jessie_udp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:57.109: INFO: Unable to read jessie_tcp@dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:57.218: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:57.345: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7633.svc from pod dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788: the server could not find the requested resource (get pods dns-test-34965c32-a70f-478f-8bae-e993c01b4788)
Jun 17 04:45:57.889: INFO: Lookups using dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7633 wheezy_tcp@dns-test-service.dns-7633 wheezy_udp@dns-test-service.dns-7633.svc wheezy_tcp@dns-test-service.dns-7633.svc wheezy_udp@_http._tcp.dns-test-service.dns-7633.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7633.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7633 jessie_tcp@dns-test-service.dns-7633 jessie_udp@dns-test-service.dns-7633.svc jessie_tcp@dns-test-service.dns-7633.svc jessie_udp@_http._tcp.dns-test-service.dns-7633.svc jessie_tcp@_http._tcp.dns-test-service.dns-7633.svc]
Jun 17 04:46:02.738: INFO: DNS probes using dns-7633/dns-test-34965c32-a70f-478f-8bae-e993c01b4788 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 118 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 17 04:45:55.213: INFO: Waiting up to 5m0s for pod "security-context-62e443de-ff30-420f-952e-e77def39902a" in namespace "security-context-5304" to be "Succeeded or Failed"
Jun 17 04:45:55.318: INFO: Pod "security-context-62e443de-ff30-420f-952e-e77def39902a": Phase="Pending", Reason="", readiness=false. Elapsed: 104.700038ms
Jun 17 04:45:57.424: INFO: Pod "security-context-62e443de-ff30-420f-952e-e77def39902a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210228029s
Jun 17 04:45:59.537: INFO: Pod "security-context-62e443de-ff30-420f-952e-e77def39902a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323983163s
Jun 17 04:46:01.643: INFO: Pod "security-context-62e443de-ff30-420f-952e-e77def39902a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42973538s
Jun 17 04:46:03.753: INFO: Pod "security-context-62e443de-ff30-420f-952e-e77def39902a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539334158s
Jun 17 04:46:05.859: INFO: Pod "security-context-62e443de-ff30-420f-952e-e77def39902a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.645420812s
STEP: Saw pod success
Jun 17 04:46:05.859: INFO: Pod "security-context-62e443de-ff30-420f-952e-e77def39902a" satisfied condition "Succeeded or Failed"
Jun 17 04:46:05.964: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod security-context-62e443de-ff30-420f-952e-e77def39902a container test-container: <nil>
STEP: delete the pod
Jun 17 04:46:06.192: INFO: Waiting for pod security-context-62e443de-ff30-420f-952e-e77def39902a to disappear
Jun 17 04:46:06.297: INFO: Pod security-context-62e443de-ff30-420f-952e-e77def39902a no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.144 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
should support seccomp default which is unconfined [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":3,"skipped":39,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 9 lines ...
Jun 17 04:45:35.025: INFO: Creating resource for dynamic PV
Jun 17 04:45:35.025: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi}
STEP: creating a StorageClass volume-expand-1388fvsg6
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jun 17 04:45:35.345: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
Jun 17 04:45:35.568: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:37.781: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:39.806: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:41.784: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:43.782: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:45.781: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:47.782: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:49.787: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:51.780: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:53.781: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:55.783: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:57.784: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:45:59.780: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:46:01.780: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:46:03.780: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:46:05.782: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-1388fvsg6",
... // 3 identical fields
}
Jun 17 04:46:05.994: INFO: Error updating pvc awsr5rgd: PersistentVolumeClaim "awsr5rgd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should not allow expansion of pvcs without AllowVolumeExpansion property
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":6,"failed":0}
SSSSS
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":2,"skipped":33,"failed":0}
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:45:54.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jun 17 04:45:55.373: INFO: Waiting up to 5m0s for pod "security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20" in namespace "security-context-8031" to be "Succeeded or Failed"
Jun 17 04:45:55.486: INFO: Pod "security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20": Phase="Pending", Reason="", readiness=false. Elapsed: 112.95677ms
Jun 17 04:45:57.593: INFO: Pod "security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220106569s
Jun 17 04:45:59.698: INFO: Pod "security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325155165s
Jun 17 04:46:01.803: INFO: Pod "security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430292358s
Jun 17 04:46:03.909: INFO: Pod "security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536275232s
Jun 17 04:46:06.016: INFO: Pod "security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.642881241s
STEP: Saw pod success
Jun 17 04:46:06.016: INFO: Pod "security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20" satisfied condition "Succeeded or Failed"
Jun 17 04:46:06.120: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20 container test-container: <nil>
STEP: delete the pod
Jun 17 04:46:06.346: INFO: Waiting for pod security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20 to disappear
Jun 17 04:46:06.453: INFO: Pod security-context-359d1196-e9ea-4c79-96bd-d6ac8ae9ac20 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 54 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should resize volume when PVC is edited while pod is using it
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 102 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI attach test using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:332
should require VolumeAttach for ephemermal volume and drivers with attachment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:360
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:08.922: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: emptydir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver emptydir doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 100 lines ...
• [SLOW TEST:10.176 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:1423
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.","total":-1,"completed":3,"skipped":10,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:13.458: INFO: Driver local doesn't support ext3 -- skipping
... skipping 95 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 47 lines ...
• [SLOW TEST:12.700 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a persistent volume claim
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:480
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":4,"skipped":39,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
... skipping 152 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":22,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Jun 17 04:46:00.721: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 17 04:46:00.721: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qrbc
STEP: Creating a pod to test subpath
Jun 17 04:46:00.828: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qrbc" in namespace "provisioning-4897" to be "Succeeded or Failed"
Jun 17 04:46:00.933: INFO: Pod "pod-subpath-test-inlinevolume-qrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 105.071309ms
Jun 17 04:46:03.042: INFO: Pod "pod-subpath-test-inlinevolume-qrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213429027s
Jun 17 04:46:05.148: INFO: Pod "pod-subpath-test-inlinevolume-qrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319701232s
Jun 17 04:46:07.253: INFO: Pod "pod-subpath-test-inlinevolume-qrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425166165s
Jun 17 04:46:09.360: INFO: Pod "pod-subpath-test-inlinevolume-qrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531485806s
Jun 17 04:46:11.465: INFO: Pod "pod-subpath-test-inlinevolume-qrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.63701377s
Jun 17 04:46:13.573: INFO: Pod "pod-subpath-test-inlinevolume-qrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.745034421s
Jun 17 04:46:15.679: INFO: Pod "pod-subpath-test-inlinevolume-qrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.850650753s
Jun 17 04:46:17.786: INFO: Pod "pod-subpath-test-inlinevolume-qrbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.957766573s
STEP: Saw pod success
Jun 17 04:46:17.786: INFO: Pod "pod-subpath-test-inlinevolume-qrbc" satisfied condition "Succeeded or Failed"
Jun 17 04:46:17.891: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-qrbc container test-container-subpath-inlinevolume-qrbc: <nil>
STEP: delete the pod
Jun 17 04:46:18.110: INFO: Waiting for pod pod-subpath-test-inlinevolume-qrbc to disappear
Jun 17 04:46:18.215: INFO: Pod pod-subpath-test-inlinevolume-qrbc no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qrbc
Jun 17 04:46:18.215: INFO: Deleting pod "pod-subpath-test-inlinevolume-qrbc" in namespace "provisioning-4897"
... skipping 12 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:18.650: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 80 lines ...
Driver emptydir doesn't support PreprovisionedPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:46:03.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:16.832 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should resolve DNS of partial qualified names for the cluster [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
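The DNS test that just passed checks that partially qualified names (a bare service name or name.namespace) resolve from inside a pod; this works because kubelet writes cluster search domains into the pod's /etc/resolv.conf. A minimal in-pod check of the same behaviour, with a placeholder name, looks roughly like this:

package dnscheck

import (
	"fmt"
	"net"
)

// resolvePartial resolves a partially qualified service name from inside a pod.
// The lookup only succeeds because /etc/resolv.conf carries search domains such
// as <namespace>.svc.cluster.local, svc.cluster.local and cluster.local.
func resolvePartial(name string) error {
	addrs, err := net.LookupHost(name) // e.g. "kubernetes.default"
	if err != nil {
		return fmt.Errorf("partial name %q did not resolve: %w", name, err)
	}
	fmt.Printf("%s -> %v\n", name, addrs)
	return nil
}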
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:46:20.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 141 lines ...
Jun 17 04:45:59.327: INFO: PersistentVolumeClaim pvc-72mbw found but phase is Pending instead of Bound.
Jun 17 04:46:01.432: INFO: PersistentVolumeClaim pvc-72mbw found and phase=Bound (14.852420898s)
Jun 17 04:46:01.433: INFO: Waiting up to 3m0s for PersistentVolume local-9db9q to have phase Bound
Jun 17 04:46:01.539: INFO: PersistentVolume local-9db9q found and phase=Bound (106.892514ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9txg
STEP: Creating a pod to test subpath
Jun 17 04:46:01.855: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9txg" in namespace "provisioning-4468" to be "Succeeded or Failed"
Jun 17 04:46:01.960: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Pending", Reason="", readiness=false. Elapsed: 105.300076ms
Jun 17 04:46:04.066: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210551863s
Jun 17 04:46:06.171: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315732638s
Jun 17 04:46:08.276: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420890268s
Jun 17 04:46:10.382: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.527111824s
Jun 17 04:46:12.487: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.632149336s
STEP: Saw pod success
Jun 17 04:46:12.487: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg" satisfied condition "Succeeded or Failed"
Jun 17 04:46:12.592: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-9txg container test-container-subpath-preprovisionedpv-9txg: <nil>
STEP: delete the pod
Jun 17 04:46:12.819: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9txg to disappear
Jun 17 04:46:12.924: INFO: Pod pod-subpath-test-preprovisionedpv-9txg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9txg
Jun 17 04:46:12.924: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9txg" in namespace "provisioning-4468"
STEP: Creating pod pod-subpath-test-preprovisionedpv-9txg
STEP: Creating a pod to test subpath
Jun 17 04:46:13.140: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9txg" in namespace "provisioning-4468" to be "Succeeded or Failed"
Jun 17 04:46:13.244: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Pending", Reason="", readiness=false. Elapsed: 104.555476ms
Jun 17 04:46:15.350: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210535078s
Jun 17 04:46:17.456: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315849507s
Jun 17 04:46:19.562: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.422332936s
STEP: Saw pod success
Jun 17 04:46:19.562: INFO: Pod "pod-subpath-test-preprovisionedpv-9txg" satisfied condition "Succeeded or Failed"
Jun 17 04:46:19.667: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-9txg container test-container-subpath-preprovisionedpv-9txg: <nil>
STEP: delete the pod
Jun 17 04:46:19.884: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9txg to disappear
Jun 17 04:46:19.989: INFO: Pod pod-subpath-test-preprovisionedpv-9txg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9txg
Jun 17 04:46:19.989: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9txg" in namespace "provisioning-4468"
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":39,"failed":0}
S
------------------------------
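The "PersistentVolumeClaim ... found but phase is Pending instead of Bound" retries earlier in this block are the same poll-until-phase pattern applied to claims. A client-go sketch of waiting for a PVC to bind follows; the helper name and interval are illustrative assumptions.

package e2epoll

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls a PersistentVolumeClaim until it reports phase Bound,
// matching the "found but phase is Pending instead of Bound" retries in the log.
func waitForPVCBound(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != v1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}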
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:21.544: INFO: Only supported for providers [vsphere] (not aws)
... skipping 49 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jun 17 04:45:29.030: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 04:45:29.246: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5587" in namespace "provisioning-5587" to be "Succeeded or Failed"
Jun 17 04:45:29.351: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 105.747924ms
Jun 17 04:45:31.458: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212782982s
Jun 17 04:45:33.573: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326965378s
Jun 17 04:45:35.679: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432853927s
Jun 17 04:45:37.785: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539637918s
Jun 17 04:45:39.892: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 10.646052652s
Jun 17 04:45:41.998: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 12.752575706s
Jun 17 04:45:44.105: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 14.859652683s
Jun 17 04:45:46.211: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.96573634s
STEP: Saw pod success
Jun 17 04:45:46.211: INFO: Pod "hostpath-symlink-prep-provisioning-5587" satisfied condition "Succeeded or Failed"
Jun 17 04:45:46.212: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5587" in namespace "provisioning-5587"
Jun 17 04:45:46.321: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5587" to be fully deleted
Jun 17 04:45:46.427: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-858x
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 04:45:46.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-858x" in namespace "provisioning-5587" to be "Succeeded or Failed"
Jun 17 04:45:46.639: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Pending", Reason="", readiness=false. Elapsed: 105.589783ms
Jun 17 04:45:48.747: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213592004s
Jun 17 04:45:50.853: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Running", Reason="", readiness=true. Elapsed: 4.319647223s
Jun 17 04:45:52.960: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Running", Reason="", readiness=true. Elapsed: 6.426996372s
Jun 17 04:45:55.068: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Running", Reason="", readiness=true. Elapsed: 8.534155369s
Jun 17 04:45:57.175: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Running", Reason="", readiness=true. Elapsed: 10.641556968s
... skipping 3 lines ...
Jun 17 04:46:05.602: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Running", Reason="", readiness=true. Elapsed: 19.068423232s
Jun 17 04:46:07.709: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Running", Reason="", readiness=true. Elapsed: 21.175751878s
Jun 17 04:46:09.816: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Running", Reason="", readiness=false. Elapsed: 23.283017248s
Jun 17 04:46:11.923: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Running", Reason="", readiness=false. Elapsed: 25.389385948s
Jun 17 04:46:14.030: INFO: Pod "pod-subpath-test-inlinevolume-858x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.496809352s
STEP: Saw pod success
Jun 17 04:46:14.030: INFO: Pod "pod-subpath-test-inlinevolume-858x" satisfied condition "Succeeded or Failed"
Jun 17 04:46:14.136: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-858x container test-container-subpath-inlinevolume-858x: <nil>
STEP: delete the pod
Jun 17 04:46:14.357: INFO: Waiting for pod pod-subpath-test-inlinevolume-858x to disappear
Jun 17 04:46:14.463: INFO: Pod pod-subpath-test-inlinevolume-858x no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-858x
Jun 17 04:46:14.463: INFO: Deleting pod "pod-subpath-test-inlinevolume-858x" in namespace "provisioning-5587"
STEP: Deleting pod
Jun 17 04:46:14.569: INFO: Deleting pod "pod-subpath-test-inlinevolume-858x" in namespace "provisioning-5587"
Jun 17 04:46:14.784: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5587" in namespace "provisioning-5587" to be "Succeeded or Failed"
Jun 17 04:46:14.890: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 105.972064ms
Jun 17 04:46:16.997: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212531465s
Jun 17 04:46:19.106: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321277011s
Jun 17 04:46:21.213: INFO: Pod "hostpath-symlink-prep-provisioning-5587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.428399878s
STEP: Saw pod success
Jun 17 04:46:21.213: INFO: Pod "hostpath-symlink-prep-provisioning-5587" satisfied condition "Succeeded or Failed"
Jun 17 04:46:21.213: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5587" in namespace "provisioning-5587"
Jun 17 04:46:21.321: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5587" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:21.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5587" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":31,"failed":0}
SSSSS
------------------------------
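The "delete the pod" / "Waiting for pod ... to disappear" / "no longer exists" sequence that closes most blocks above is a delete followed by polling until the API server returns NotFound. A minimal sketch of that cleanup step; the helper name and interval are assumptions.

package e2epoll

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deletePodAndWait deletes a pod and then polls until the API server reports
// NotFound, mirroring the "Waiting for pod ... to disappear" lines in the log.
func deletePodAndWait(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	if err := c.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod no longer exists
		}
		return false, err // still present (err == nil) or a transient error
	})
}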
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:21.699: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
Jun 17 04:45:44.066: INFO: PersistentVolumeClaim pvc-dnmgc found but phase is Pending instead of Bound.
Jun 17 04:45:46.173: INFO: PersistentVolumeClaim pvc-dnmgc found and phase=Bound (14.866942094s)
Jun 17 04:45:46.174: INFO: Waiting up to 3m0s for PersistentVolume aws-mwblg to have phase Bound
Jun 17 04:45:46.280: INFO: PersistentVolume aws-mwblg found and phase=Bound (106.281944ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6f2m
STEP: Creating a pod to test exec-volume-test
Jun 17 04:45:46.602: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6f2m" in namespace "volume-7834" to be "Succeeded or Failed"
Jun 17 04:45:46.708: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 106.055936ms
Jun 17 04:45:48.817: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215117689s
Jun 17 04:45:50.924: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3216092s
Jun 17 04:45:53.030: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428085528s
Jun 17 04:45:55.138: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535589068s
Jun 17 04:45:57.244: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.641883686s
... skipping 4 lines ...
Jun 17 04:46:07.792: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 21.18914444s
Jun 17 04:46:09.899: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 23.296965308s
Jun 17 04:46:12.007: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 25.404981318s
Jun 17 04:46:14.115: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Pending", Reason="", readiness=false. Elapsed: 27.512180458s
Jun 17 04:46:16.222: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.619479339s
STEP: Saw pod success
Jun 17 04:46:16.222: INFO: Pod "exec-volume-test-preprovisionedpv-6f2m" satisfied condition "Succeeded or Failed"
Jun 17 04:46:16.328: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-6f2m container exec-container-preprovisionedpv-6f2m: <nil>
STEP: delete the pod
Jun 17 04:46:16.566: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6f2m to disappear
Jun 17 04:46:16.677: INFO: Pod exec-volume-test-preprovisionedpv-6f2m no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6f2m
Jun 17 04:46:16.677: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6f2m" in namespace "volume-7834"
STEP: Deleting pv and pvc
Jun 17 04:46:16.783: INFO: Deleting PersistentVolumeClaim "pvc-dnmgc"
Jun 17 04:46:16.890: INFO: Deleting PersistentVolume "aws-mwblg"
Jun 17 04:46:17.184: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0b1ae48488f6316ef", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0b1ae48488f6316ef is currently attached to i-03b2797f05a8100f1
status code: 400, request id: 520cc2e7-5b75-43bb-b1f8-1760d7902423
Jun 17 04:46:22.837: INFO: Successfully deleted PD "aws://eu-west-1a/vol-0b1ae48488f6316ef".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:22.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7834" for this suite.
... skipping 16 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 17 04:46:18.697: INFO: Waiting up to 5m0s for pod "pod-632ce3f7-5284-41db-95a9-2ee38f87823b" in namespace "emptydir-889" to be "Succeeded or Failed"
Jun 17 04:46:18.803: INFO: Pod "pod-632ce3f7-5284-41db-95a9-2ee38f87823b": Phase="Pending", Reason="", readiness=false. Elapsed: 105.799451ms
Jun 17 04:46:20.911: INFO: Pod "pod-632ce3f7-5284-41db-95a9-2ee38f87823b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21324247s
Jun 17 04:46:23.017: INFO: Pod "pod-632ce3f7-5284-41db-95a9-2ee38f87823b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.319927383s
STEP: Saw pod success
Jun 17 04:46:23.017: INFO: Pod "pod-632ce3f7-5284-41db-95a9-2ee38f87823b" satisfied condition "Succeeded or Failed"
Jun 17 04:46:23.123: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-632ce3f7-5284-41db-95a9-2ee38f87823b container test-container: <nil>
STEP: delete the pod
Jun 17 04:46:23.348: INFO: Waiting for pod pod-632ce3f7-5284-41db-95a9-2ee38f87823b to disappear
Jun 17 04:46:23.454: INFO: Pod pod-632ce3f7-5284-41db-95a9-2ee38f87823b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.831 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:23.683: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 45 lines ...
raw block volumes cannot be read-only
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:175
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:45:37.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:47.954 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should remove pods when job is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:191
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":2,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:25.860: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 110 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:27.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-6087" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":33,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:27.682: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
• [SLOW TEST:12.625 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate configmap [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":6,"skipped":27,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-cb6ef895-c8ff-4fdc-b37c-2feb3966429b
STEP: Creating a pod to test consume configMaps
Jun 17 04:46:22.672: INFO: Waiting up to 5m0s for pod "pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3" in namespace "configmap-7387" to be "Succeeded or Failed"
Jun 17 04:46:22.778: INFO: Pod "pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3": Phase="Pending", Reason="", readiness=false. Elapsed: 105.898722ms
Jun 17 04:46:24.887: INFO: Pod "pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215320545s
Jun 17 04:46:26.994: INFO: Pod "pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321923808s
Jun 17 04:46:29.100: INFO: Pod "pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428204485s
Jun 17 04:46:31.206: INFO: Pod "pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.534216726s
STEP: Saw pod success
Jun 17 04:46:31.206: INFO: Pod "pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3" satisfied condition "Succeeded or Failed"
Jun 17 04:46:31.312: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3 container configmap-volume-test: <nil>
STEP: delete the pod
Jun 17 04:46:31.536: INFO: Waiting for pod pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3 to disappear
Jun 17 04:46:31.646: INFO: Pod pod-configmaps-2a54153e-3ab8-4a69-9551-cc20c8fcace3 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.144 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 130 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":2,"skipped":12,"failed":0}
SSS
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:46:05.885: INFO: >>> kubeConfig: /root/.kube/config
... skipping 5 lines ...
Jun 17 04:46:06.634: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jun 17 04:46:07.422: INFO: Successfully created a new PD: "aws://eu-west-1a/vol-0eab5e307912464c1".
Jun 17 04:46:07.422: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-d7nv
STEP: Creating a pod to test exec-volume-test
Jun 17 04:46:07.535: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-d7nv" in namespace "volume-9975" to be "Succeeded or Failed"
Jun 17 04:46:07.643: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Pending", Reason="", readiness=false. Elapsed: 107.229388ms
Jun 17 04:46:09.751: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215372861s
Jun 17 04:46:11.858: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322691964s
Jun 17 04:46:13.966: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430882277s
Jun 17 04:46:16.074: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538424487s
Jun 17 04:46:18.182: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.646566094s
Jun 17 04:46:20.310: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.774428003s
Jun 17 04:46:22.417: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.881231725s
Jun 17 04:46:24.525: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.989208522s
Jun 17 04:46:26.632: INFO: Pod "exec-volume-test-inlinevolume-d7nv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.096810262s
STEP: Saw pod success
Jun 17 04:46:26.632: INFO: Pod "exec-volume-test-inlinevolume-d7nv" satisfied condition "Succeeded or Failed"
Jun 17 04:46:26.739: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod exec-volume-test-inlinevolume-d7nv container exec-container-inlinevolume-d7nv: <nil>
STEP: delete the pod
Jun 17 04:46:26.967: INFO: Waiting for pod exec-volume-test-inlinevolume-d7nv to disappear
Jun 17 04:46:27.074: INFO: Pod exec-volume-test-inlinevolume-d7nv no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-d7nv
Jun 17 04:46:27.074: INFO: Deleting pod "exec-volume-test-inlinevolume-d7nv" in namespace "volume-9975"
Jun 17 04:46:27.377: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0eab5e307912464c1", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0eab5e307912464c1 is currently attached to i-00cb91d9735ab5447
status code: 400, request id: 09254d1d-96fd-41dc-83f5-81fce9ac3b7b
Jun 17 04:46:33.087: INFO: Successfully deleted PD "aws://eu-west-1a/vol-0eab5e307912464c1".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:33.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9975" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
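The "Couldn't delete PD ..., sleeping 5s: ... VolumeInUse" lines in the block above show cleanup retrying the EBS DeleteVolume call until the volume has detached from its instance. A sketch of that retry loop using the AWS SDK for Go v1 follows; it illustrates the pattern and is not the e2e framework's own helper.

package awsutil

import (
	"fmt"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// deleteVolumeWithRetry retries DeleteVolume while EC2 still reports the volume
// as attached (error code VolumeInUse), sleeping 5s between attempts as in the log.
func deleteVolumeWithRetry(volumeID string, attempts int) error {
	svc := ec2.New(session.Must(session.NewSession()))
	var lastErr error
	for i := 0; i < attempts; i++ {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			fmt.Printf("Successfully deleted PD %q.\n", volumeID)
			return nil
		}
		lastErr = err
		if aerr, ok := err.(awserr.Error); ok && strings.Contains(aerr.Code(), "VolumeInUse") {
			fmt.Printf("Couldn't delete PD %q, sleeping 5s: %v\n", volumeID, err)
			time.Sleep(5 * time.Second)
			continue
		}
		return err // not a transient in-use error
	}
	return lastErr
}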
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":13,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jun 17 04:46:22.263: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17" in namespace "downward-api-6834" to be "Succeeded or Failed"
Jun 17 04:46:22.371: INFO: Pod "downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17": Phase="Pending", Reason="", readiness=false. Elapsed: 107.44666ms
Jun 17 04:46:24.479: INFO: Pod "downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215054183s
Jun 17 04:46:26.588: INFO: Pod "downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32426372s
Jun 17 04:46:28.697: INFO: Pod "downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433808977s
Jun 17 04:46:30.806: INFO: Pod "downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542513415s
Jun 17 04:46:32.914: INFO: Pod "downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.650239446s
STEP: Saw pod success
Jun 17 04:46:32.914: INFO: Pod "downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17" satisfied condition "Succeeded or Failed"
Jun 17 04:46:33.021: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17 container client-container: <nil>
STEP: delete the pod
Jun 17 04:46:33.243: INFO: Waiting for pod downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17 to disappear
Jun 17 04:46:33.351: INFO: Pod downwardapi-volume-bcd4b486-cef4-4d99-b67d-3044cec8cf17 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.183 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should provide podname only [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
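The Downward API test above ("should provide podname only") mounts a volume whose file contents are filled in from the pod's own metadata. A minimal sketch of the relevant volume source, offered as an illustration rather than the test's actual pod spec:

package podspecs

import v1 "k8s.io/api/core/v1"

// downwardAPIPodnameVolume exposes the pod's name as a file named "podname",
// which is what the "should provide podname only" test reads back.
func downwardAPIPodnameVolume() v1.Volume {
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			DownwardAPI: &v1.DownwardAPIVolumeSource{
				Items: []v1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &v1.ObjectFieldSelector{
						FieldPath: "metadata.name",
					},
				}},
			},
		},
	}
}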
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] health handlers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:34.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-1852" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:34.644: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 80 lines ...
Driver hostPathSymlink doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:46:03.461: INFO: >>> kubeConfig: /root/.kube/config
... skipping 7 lines ...
Jun 17 04:46:04.201: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volume-390j6c4d
STEP: creating a claim
Jun 17 04:46:04.306: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-k876
STEP: Creating a pod to test exec-volume-test
Jun 17 04:46:04.627: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-k876" in namespace "volume-390" to be "Succeeded or Failed"
Jun 17 04:46:04.733: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Pending", Reason="", readiness=false. Elapsed: 105.43329ms
Jun 17 04:46:06.839: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211483618s
Jun 17 04:46:08.945: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318200944s
Jun 17 04:46:11.054: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427068276s
Jun 17 04:46:13.160: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533408549s
Jun 17 04:46:15.266: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Pending", Reason="", readiness=false. Elapsed: 10.639125604s
Jun 17 04:46:17.373: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Pending", Reason="", readiness=false. Elapsed: 12.745748773s
Jun 17 04:46:19.479: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Pending", Reason="", readiness=false. Elapsed: 14.851523619s
Jun 17 04:46:21.590: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Pending", Reason="", readiness=false. Elapsed: 16.962458641s
Jun 17 04:46:23.696: INFO: Pod "exec-volume-test-dynamicpv-k876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.069307856s
STEP: Saw pod success
Jun 17 04:46:23.696: INFO: Pod "exec-volume-test-dynamicpv-k876" satisfied condition "Succeeded or Failed"
Jun 17 04:46:23.802: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod exec-volume-test-dynamicpv-k876 container exec-container-dynamicpv-k876: <nil>
STEP: delete the pod
Jun 17 04:46:24.028: INFO: Waiting for pod exec-volume-test-dynamicpv-k876 to disappear
Jun 17 04:46:24.134: INFO: Pod exec-volume-test-dynamicpv-k876 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-k876
Jun 17 04:46:24.134: INFO: Deleting pod "exec-volume-test-dynamicpv-k876" in namespace "volume-390"
... skipping 17 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:35.206: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 132 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 17 04:46:26.782: INFO: Waiting up to 5m0s for pod "security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab" in namespace "security-context-3630" to be "Succeeded or Failed"
Jun 17 04:46:26.889: INFO: Pod "security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab": Phase="Pending", Reason="", readiness=false. Elapsed: 106.649575ms
Jun 17 04:46:28.996: INFO: Pod "security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214516316s
Jun 17 04:46:31.104: INFO: Pod "security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321941101s
Jun 17 04:46:33.213: INFO: Pod "security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431126115s
Jun 17 04:46:35.320: INFO: Pod "security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538270039s
Jun 17 04:46:37.427: INFO: Pod "security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.645546012s
STEP: Saw pod success
Jun 17 04:46:37.428: INFO: Pod "security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab" satisfied condition "Succeeded or Failed"
Jun 17 04:46:37.535: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab container test-container: <nil>
STEP: delete the pod
Jun 17 04:46:37.773: INFO: Waiting for pod security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab to disappear
Jun 17 04:46:37.879: INFO: Pod security-context-80ecff94-56e3-4872-9050-fd8b8cf330ab no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.180 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
should support seccomp runtime/default [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
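The Security Context test above requests the runtime's default seccomp profile for the pod; the STEP line still mentions the legacy seccomp.security.alpha.kubernetes.io/pod annotation, but on current API versions the same request is expressed via the pod securityContext. A sketch of that field, again illustrative rather than the test's own template:

package podspecs

import v1 "k8s.io/api/core/v1"

// runtimeDefaultSeccomp asks the container runtime to apply its default seccomp
// profile to every container in the pod, the behaviour checked by the
// "should support seccomp runtime/default" test.
func runtimeDefaultSeccomp() *v1.PodSecurityContext {
	return &v1.PodSecurityContext{
		SeccompProfile: &v1.SeccompProfile{
			Type: v1.SeccompProfileTypeRuntimeDefault,
		},
	}
}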
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":3,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:38.113: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 11 lines ...
Only supported for providers [openstack] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:45:30.081: INFO: >>> kubeConfig: /root/.kube/config
... skipping 114 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
[Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should create read/write inline ephemeral volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":2,"skipped":3,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:41.350: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 129 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:42.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-6807" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":3,"skipped":13,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:43.195: INFO: Only supported for providers [gce gke] (not aws)
... skipping 22 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 17 04:46:32.243: INFO: Waiting up to 5m0s for pod "pod-8fdfa068-8e1d-4852-9088-de0c30377c08" in namespace "emptydir-4689" to be "Succeeded or Failed"
Jun 17 04:46:32.348: INFO: Pod "pod-8fdfa068-8e1d-4852-9088-de0c30377c08": Phase="Pending", Reason="", readiness=false. Elapsed: 104.743125ms
Jun 17 04:46:34.454: INFO: Pod "pod-8fdfa068-8e1d-4852-9088-de0c30377c08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211460602s
Jun 17 04:46:36.560: INFO: Pod "pod-8fdfa068-8e1d-4852-9088-de0c30377c08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317210926s
Jun 17 04:46:38.667: INFO: Pod "pod-8fdfa068-8e1d-4852-9088-de0c30377c08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424378048s
Jun 17 04:46:40.774: INFO: Pod "pod-8fdfa068-8e1d-4852-9088-de0c30377c08": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530860655s
Jun 17 04:46:42.880: INFO: Pod "pod-8fdfa068-8e1d-4852-9088-de0c30377c08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.637191055s
STEP: Saw pod success
Jun 17 04:46:42.880: INFO: Pod "pod-8fdfa068-8e1d-4852-9088-de0c30377c08" satisfied condition "Succeeded or Failed"
Jun 17 04:46:42.990: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-8fdfa068-8e1d-4852-9088-de0c30377c08 container test-container: <nil>
STEP: delete the pod
Jun 17 04:46:43.211: INFO: Waiting for pod pod-8fdfa068-8e1d-4852-9088-de0c30377c08 to disappear
Jun 17 04:46:43.316: INFO: Pod pod-8fdfa068-8e1d-4852-9088-de0c30377c08 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.137 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":29,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:43.549: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 97 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: gluster]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Only supported for node OS distro [gci ubuntu custom] (not debian)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 152 lines ...
Jun 17 04:46:13.600: INFO: PersistentVolumeClaim pvc-4fqhh found but phase is Pending instead of Bound.
Jun 17 04:46:15.705: INFO: PersistentVolumeClaim pvc-4fqhh found and phase=Bound (2.212452233s)
Jun 17 04:46:15.705: INFO: Waiting up to 3m0s for PersistentVolume local-xtplq to have phase Bound
Jun 17 04:46:15.810: INFO: PersistentVolume local-xtplq found and phase=Bound (104.872171ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2z6n
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 04:46:16.126: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2z6n" in namespace "provisioning-3414" to be "Succeeded or Failed"
Jun 17 04:46:16.231: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Pending", Reason="", readiness=false. Elapsed: 104.743966ms
Jun 17 04:46:18.339: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212414747s
Jun 17 04:46:20.450: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323425476s
Jun 17 04:46:22.555: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428759505s
Jun 17 04:46:24.662: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535417449s
Jun 17 04:46:26.767: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Running", Reason="", readiness=true. Elapsed: 10.640806182s
... skipping 3 lines ...
Jun 17 04:46:35.192: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Running", Reason="", readiness=true. Elapsed: 19.065852617s
Jun 17 04:46:37.298: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Running", Reason="", readiness=true. Elapsed: 21.171300941s
Jun 17 04:46:39.403: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Running", Reason="", readiness=true. Elapsed: 23.276667452s
Jun 17 04:46:41.510: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Running", Reason="", readiness=false. Elapsed: 25.38363289s
Jun 17 04:46:43.617: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.490523266s
STEP: Saw pod success
Jun 17 04:46:43.617: INFO: Pod "pod-subpath-test-preprovisionedpv-2z6n" satisfied condition "Succeeded or Failed"
Jun 17 04:46:43.722: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-2z6n container test-container-subpath-preprovisionedpv-2z6n: <nil>
STEP: delete the pod
Jun 17 04:46:43.950: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2z6n to disappear
Jun 17 04:46:44.056: INFO: Pod pod-subpath-test-preprovisionedpv-2z6n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2z6n
Jun 17 04:46:44.056: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2z6n" in namespace "provisioning-3414"
... skipping 61 lines ...
• [SLOW TEST:9.825 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}
[36mS[0m[36mS[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:46:45.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:48.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1002" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:49.153: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 169 lines ...
Jun 17 04:46:31.312: INFO: Waiting for pod aws-client to disappear
Jun 17 04:46:31.421: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jun 17 04:46:31.422: INFO: Deleting PersistentVolumeClaim "pvc-fcdzx"
Jun 17 04:46:31.529: INFO: Deleting PersistentVolume "aws-h5l6g"
Jun 17 04:46:32.318: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0de8d427c61364c5c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0de8d427c61364c5c is currently attached to i-03b2797f05a8100f1
status code: 400, request id: d49299e6-a9c8-4fab-9500-356b6e10015f
Jun 17 04:46:37.928: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0de8d427c61364c5c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0de8d427c61364c5c is currently attached to i-03b2797f05a8100f1
status code: 400, request id: 9701f555-0ed3-474a-be27-8657a1dd92d9
Jun 17 04:46:43.453: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0de8d427c61364c5c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0de8d427c61364c5c is currently attached to i-03b2797f05a8100f1
status code: 400, request id: 85356b80-39ad-4932-b128-b92e78fbe0a9
Jun 17 04:46:49.048: INFO: Successfully deleted PD "aws://eu-west-1a/vol-0de8d427c61364c5c".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:49.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1992" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:49.407: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":3,"skipped":33,"failed":0}
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:46:06.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
Granular Checks: Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:49.562: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 54 lines ...
Jun 17 04:45:46.150: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun 17 04:45:48.149: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Jun 17 04:45:48.260: INFO: Running '/logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1375 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jun 17 04:45:49.351: INFO: rc: 7
Jun 17 04:45:49.460: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jun 17 04:45:49.567: INFO: Pod kube-proxy-mode-detector no longer exists
Jun 17 04:45:49.567: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1375 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:
stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7
error:
exit status 7
STEP: creating service affinity-clusterip-timeout in namespace services-1375
STEP: creating replication controller affinity-clusterip-timeout in namespace services-1375
I0617 04:45:49.791895 6584 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1375, replica count: 3
I0617 04:45:52.945165 6584 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 04:45:55.946333 6584 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
... skipping 48 lines ...
• [SLOW TEST:71.140 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:52.240: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 150 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:52.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7742" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":5,"skipped":40,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:52.887: INFO: Driver emptydir doesn't support ext4 -- skipping
... skipping 88 lines ...
Jun 17 04:46:11.692: INFO: PersistentVolumeClaim csi-hostpath4rjrr found but phase is Pending instead of Bound.
Jun 17 04:46:13.799: INFO: PersistentVolumeClaim csi-hostpath4rjrr found but phase is Pending instead of Bound.
Jun 17 04:46:15.907: INFO: PersistentVolumeClaim csi-hostpath4rjrr found but phase is Pending instead of Bound.
Jun 17 04:46:18.013: INFO: PersistentVolumeClaim csi-hostpath4rjrr found and phase=Bound (14.853476173s)
STEP: Creating pod pod-subpath-test-dynamicpv-ntnf
STEP: Creating a pod to test subpath
Jun 17 04:46:18.331: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ntnf" in namespace "provisioning-9246" to be "Succeeded or Failed"
Jun 17 04:46:18.437: INFO: Pod "pod-subpath-test-dynamicpv-ntnf": Phase="Pending", Reason="", readiness=false. Elapsed: 105.862036ms
Jun 17 04:46:20.544: INFO: Pod "pod-subpath-test-dynamicpv-ntnf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212281305s
Jun 17 04:46:22.650: INFO: Pod "pod-subpath-test-dynamicpv-ntnf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319069535s
Jun 17 04:46:24.758: INFO: Pod "pod-subpath-test-dynamicpv-ntnf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426643636s
Jun 17 04:46:26.865: INFO: Pod "pod-subpath-test-dynamicpv-ntnf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533256117s
Jun 17 04:46:28.971: INFO: Pod "pod-subpath-test-dynamicpv-ntnf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.640086025s
Jun 17 04:46:31.078: INFO: Pod "pod-subpath-test-dynamicpv-ntnf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.746885567s
Jun 17 04:46:33.190: INFO: Pod "pod-subpath-test-dynamicpv-ntnf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.858592062s
Jun 17 04:46:35.297: INFO: Pod "pod-subpath-test-dynamicpv-ntnf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.96559638s
STEP: Saw pod success
Jun 17 04:46:35.297: INFO: Pod "pod-subpath-test-dynamicpv-ntnf" satisfied condition "Succeeded or Failed"
Jun 17 04:46:35.403: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-ntnf container test-container-subpath-dynamicpv-ntnf: <nil>
STEP: delete the pod
Jun 17 04:46:35.635: INFO: Waiting for pod pod-subpath-test-dynamicpv-ntnf to disappear
Jun 17 04:46:35.750: INFO: Pod pod-subpath-test-dynamicpv-ntnf no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ntnf
Jun 17 04:46:35.750: INFO: Deleting pod "pod-subpath-test-dynamicpv-ntnf" in namespace "provisioning-9246"
... skipping 60 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:53.350: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 38 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:56.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5989" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":4,"skipped":37,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:56.853: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
• [SLOW TEST:8.424 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be updated [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":32,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 101 lines ...
Jun 17 04:46:43.807: INFO: PersistentVolumeClaim pvc-jcgmc found but phase is Pending instead of Bound.
Jun 17 04:46:45.915: INFO: PersistentVolumeClaim pvc-jcgmc found and phase=Bound (6.43012246s)
Jun 17 04:46:45.916: INFO: Waiting up to 3m0s for PersistentVolume local-n98pv to have phase Bound
Jun 17 04:46:46.022: INFO: PersistentVolume local-n98pv found and phase=Bound (106.544012ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-rzrc
STEP: Creating a pod to test exec-volume-test
Jun 17 04:46:46.343: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-rzrc" in namespace "volume-7313" to be "Succeeded or Failed"
Jun 17 04:46:46.453: INFO: Pod "exec-volume-test-preprovisionedpv-rzrc": Phase="Pending", Reason="", readiness=false. Elapsed: 109.448846ms
Jun 17 04:46:48.562: INFO: Pod "exec-volume-test-preprovisionedpv-rzrc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219342919s
Jun 17 04:46:50.672: INFO: Pod "exec-volume-test-preprovisionedpv-rzrc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32901734s
Jun 17 04:46:52.782: INFO: Pod "exec-volume-test-preprovisionedpv-rzrc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438679671s
Jun 17 04:46:54.891: INFO: Pod "exec-volume-test-preprovisionedpv-rzrc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.547903694s
STEP: Saw pod success
Jun 17 04:46:54.891: INFO: Pod "exec-volume-test-preprovisionedpv-rzrc" satisfied condition "Succeeded or Failed"
Jun 17 04:46:55.013: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-rzrc container exec-container-preprovisionedpv-rzrc: <nil>
STEP: delete the pod
Jun 17 04:46:55.250: INFO: Waiting for pod exec-volume-test-preprovisionedpv-rzrc to disappear
Jun 17 04:46:55.357: INFO: Pod exec-volume-test-preprovisionedpv-rzrc no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-rzrc
Jun 17 04:46:55.357: INFO: Deleting pod "exec-volume-test-preprovisionedpv-rzrc" in namespace "volume-7313"
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":19,"failed":0}
SS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:46:59.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4848" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":5,"skipped":42,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:46:59.269: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jun 17 04:46:53.756: INFO: Waiting up to 5m0s for pod "busybox-user-65534-0f794970-30e8-4962-b52e-27eeb33cfaff" in namespace "security-context-test-7793" to be "Succeeded or Failed"
Jun 17 04:46:53.860: INFO: Pod "busybox-user-65534-0f794970-30e8-4962-b52e-27eeb33cfaff": Phase="Pending", Reason="", readiness=false. Elapsed: 104.241662ms
Jun 17 04:46:55.965: INFO: Pod "busybox-user-65534-0f794970-30e8-4962-b52e-27eeb33cfaff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209707898s
Jun 17 04:46:58.071: INFO: Pod "busybox-user-65534-0f794970-30e8-4962-b52e-27eeb33cfaff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315040229s
Jun 17 04:47:00.176: INFO: Pod "busybox-user-65534-0f794970-30e8-4962-b52e-27eeb33cfaff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.420682531s
Jun 17 04:47:00.176: INFO: Pod "busybox-user-65534-0f794970-30e8-4962-b52e-27eeb33cfaff" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:00.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7793" for this suite.
... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
When creating a container with runAsUser
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:00.409: INFO: Only supported for providers [gce gke] (not aws)
... skipping 84 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:46:31.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:00.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4414" for this suite.
• [SLOW TEST:29.176 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:01.114: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 119 lines ...
Jun 17 04:46:29.847: INFO: PersistentVolumeClaim pvc-zn6d8 found but phase is Pending instead of Bound.
Jun 17 04:46:31.953: INFO: PersistentVolumeClaim pvc-zn6d8 found and phase=Bound (14.85630856s)
Jun 17 04:46:31.953: INFO: Waiting up to 3m0s for PersistentVolume local-qmw4h to have phase Bound
Jun 17 04:46:32.060: INFO: PersistentVolume local-qmw4h found and phase=Bound (106.477282ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tm2f
STEP: Creating a pod to test subpath
Jun 17 04:46:32.377: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tm2f" in namespace "provisioning-1146" to be "Succeeded or Failed"
Jun 17 04:46:32.487: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 110.138447ms
Jun 17 04:46:34.594: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217676279s
Jun 17 04:46:36.701: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324536236s
Jun 17 04:46:38.808: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431456476s
Jun 17 04:46:40.914: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537243273s
Jun 17 04:46:43.022: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.645380473s
Jun 17 04:46:45.128: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.751663276s
Jun 17 04:46:47.235: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.858554522s
Jun 17 04:46:49.344: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.967048037s
Jun 17 04:46:51.467: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.090459537s
STEP: Saw pod success
Jun 17 04:46:51.467: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f" satisfied condition "Succeeded or Failed"
Jun 17 04:46:51.572: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-tm2f container test-container-subpath-preprovisionedpv-tm2f: <nil>
STEP: delete the pod
Jun 17 04:46:51.795: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tm2f to disappear
Jun 17 04:46:51.900: INFO: Pod pod-subpath-test-preprovisionedpv-tm2f no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tm2f
Jun 17 04:46:51.900: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tm2f" in namespace "provisioning-1146"
STEP: Creating pod pod-subpath-test-preprovisionedpv-tm2f
STEP: Creating a pod to test subpath
Jun 17 04:46:52.130: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tm2f" in namespace "provisioning-1146" to be "Succeeded or Failed"
Jun 17 04:46:52.236: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 105.716024ms
Jun 17 04:46:54.341: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211383609s
Jun 17 04:46:56.448: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317758772s
Jun 17 04:46:58.554: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.424333793s
STEP: Saw pod success
Jun 17 04:46:58.554: INFO: Pod "pod-subpath-test-preprovisionedpv-tm2f" satisfied condition "Succeeded or Failed"
Jun 17 04:46:58.660: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-tm2f container test-container-subpath-preprovisionedpv-tm2f: <nil>
STEP: delete the pod
Jun 17 04:46:58.883: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tm2f to disappear
Jun 17 04:46:58.988: INFO: Pod pod-subpath-test-preprovisionedpv-tm2f no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tm2f
Jun 17 04:46:58.988: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tm2f" in namespace "provisioning-1146"
... skipping 30 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":24,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:02.008: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:01.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3116" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":7,"skipped":69,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 24 lines ...
Jun 17 04:46:30.350: INFO: PersistentVolumeClaim pvc-tnt7n found but phase is Pending instead of Bound.
Jun 17 04:46:32.458: INFO: PersistentVolumeClaim pvc-tnt7n found and phase=Bound (14.861921614s)
Jun 17 04:46:32.458: INFO: Waiting up to 3m0s for PersistentVolume local-7vcv2 to have phase Bound
Jun 17 04:46:32.569: INFO: PersistentVolume local-7vcv2 found and phase=Bound (111.186237ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-46kd
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 04:46:32.895: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-46kd" in namespace "provisioning-6744" to be "Succeeded or Failed"
Jun 17 04:46:33.002: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Pending", Reason="", readiness=false. Elapsed: 106.527043ms
Jun 17 04:46:35.110: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214880509s
Jun 17 04:46:37.217: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321794111s
Jun 17 04:46:39.325: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Running", Reason="", readiness=true. Elapsed: 6.429532364s
Jun 17 04:46:41.436: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Running", Reason="", readiness=true. Elapsed: 8.540224387s
Jun 17 04:46:43.543: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Running", Reason="", readiness=true. Elapsed: 10.647581757s
... skipping 3 lines ...
Jun 17 04:46:52.012: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Running", Reason="", readiness=true. Elapsed: 19.116386236s
Jun 17 04:46:54.121: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Running", Reason="", readiness=true. Elapsed: 21.225591358s
Jun 17 04:46:56.228: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Running", Reason="", readiness=true. Elapsed: 23.332937575s
Jun 17 04:46:58.335: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Running", Reason="", readiness=false. Elapsed: 25.439481578s
Jun 17 04:47:00.442: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.546524896s
STEP: Saw pod success
Jun 17 04:47:00.442: INFO: Pod "pod-subpath-test-preprovisionedpv-46kd" satisfied condition "Succeeded or Failed"
Jun 17 04:47:00.552: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-46kd container test-container-subpath-preprovisionedpv-46kd: <nil>
STEP: delete the pod
Jun 17 04:47:00.788: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-46kd to disappear
Jun 17 04:47:00.894: INFO: Pod pod-subpath-test-preprovisionedpv-46kd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-46kd
Jun 17 04:47:00.894: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-46kd" in namespace "provisioning-6744"
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":25,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:02.518: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":35,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:04.670: INFO: Only supported for providers [gce gke] (not aws)
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver hostPath doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 7 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-b46efa52-0a64-489e-80b1-c17d0b6bf42f
STEP: Creating a pod to test consume configMaps
Jun 17 04:47:00.249: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56a6849a-0da0-4dd6-ac6a-53fed8a55d96" in namespace "projected-7532" to be "Succeeded or Failed"
Jun 17 04:47:00.355: INFO: Pod "pod-projected-configmaps-56a6849a-0da0-4dd6-ac6a-53fed8a55d96": Phase="Pending", Reason="", readiness=false. Elapsed: 106.018276ms
Jun 17 04:47:02.462: INFO: Pod "pod-projected-configmaps-56a6849a-0da0-4dd6-ac6a-53fed8a55d96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213211981s
Jun 17 04:47:04.569: INFO: Pod "pod-projected-configmaps-56a6849a-0da0-4dd6-ac6a-53fed8a55d96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.320187319s
STEP: Saw pod success
Jun 17 04:47:04.570: INFO: Pod "pod-projected-configmaps-56a6849a-0da0-4dd6-ac6a-53fed8a55d96" satisfied condition "Succeeded or Failed"
Jun 17 04:47:04.675: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-projected-configmaps-56a6849a-0da0-4dd6-ac6a-53fed8a55d96 container agnhost-container: <nil>
STEP: delete the pod
Jun 17 04:47:04.958: INFO: Waiting for pod pod-projected-configmaps-56a6849a-0da0-4dd6-ac6a-53fed8a55d96 to disappear
Jun 17 04:47:05.064: INFO: Pod pod-projected-configmaps-56a6849a-0da0-4dd6-ac6a-53fed8a55d96 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.985 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":47,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:05.301: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 107 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should create read/write inline ephemeral volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":2,"skipped":14,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:05.666: INFO: Only supported for providers [vsphere] (not aws)
... skipping 111 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume one after the other
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":13,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:46:58.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 17 04:46:59.526: INFO: Waiting up to 5m0s for pod "pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5" in namespace "emptydir-362" to be "Succeeded or Failed"
Jun 17 04:46:59.631: INFO: Pod "pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 104.516019ms
Jun 17 04:47:01.747: INFO: Pod "pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220542919s
Jun 17 04:47:03.852: INFO: Pod "pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326029204s
Jun 17 04:47:05.958: INFO: Pod "pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431551723s
Jun 17 04:47:08.063: INFO: Pod "pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.536669913s
STEP: Saw pod success
Jun 17 04:47:08.063: INFO: Pod "pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5" satisfied condition "Succeeded or Failed"
Jun 17 04:47:08.167: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5 container test-container: <nil>
STEP: delete the pod
Jun 17 04:47:08.386: INFO: Waiting for pod pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5 to disappear
Jun 17 04:47:08.490: INFO: Pod pod-1578df0f-6b0b-4d75-a110-8bf580f90dc5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.016 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":42,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support existing directory
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jun 17 04:47:03.305: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 04:47:03.412: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-vm2d
STEP: Creating a pod to test subpath
Jun 17 04:47:03.521: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-vm2d" in namespace "provisioning-6157" to be "Succeeded or Failed"
Jun 17 04:47:03.627: INFO: Pod "pod-subpath-test-inlinevolume-vm2d": Phase="Pending", Reason="", readiness=false. Elapsed: 106.160087ms
Jun 17 04:47:05.735: INFO: Pod "pod-subpath-test-inlinevolume-vm2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213832759s
Jun 17 04:47:07.847: INFO: Pod "pod-subpath-test-inlinevolume-vm2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325831511s
Jun 17 04:47:09.955: INFO: Pod "pod-subpath-test-inlinevolume-vm2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433814612s
Jun 17 04:47:12.062: INFO: Pod "pod-subpath-test-inlinevolume-vm2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.540389943s
STEP: Saw pod success
Jun 17 04:47:12.062: INFO: Pod "pod-subpath-test-inlinevolume-vm2d" satisfied condition "Succeeded or Failed"
Jun 17 04:47:12.168: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-vm2d container test-container-volume-inlinevolume-vm2d: <nil>
STEP: delete the pod
Jun 17 04:47:12.391: INFO: Waiting for pod pod-subpath-test-inlinevolume-vm2d to disappear
Jun 17 04:47:12.497: INFO: Pod pod-subpath-test-inlinevolume-vm2d no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-vm2d
Jun 17 04:47:12.497: INFO: Deleting pod "pod-subpath-test-inlinevolume-vm2d" in namespace "provisioning-6157"
... skipping 12 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing directory
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:12.949: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 43 lines ...
Jun 17 04:46:59.253: INFO: PersistentVolumeClaim pvc-6p7x6 found but phase is Pending instead of Bound.
Jun 17 04:47:01.359: INFO: PersistentVolumeClaim pvc-6p7x6 found and phase=Bound (2.211716037s)
Jun 17 04:47:01.359: INFO: Waiting up to 3m0s for PersistentVolume local-m9wxc to have phase Bound
Jun 17 04:47:01.465: INFO: PersistentVolume local-m9wxc found and phase=Bound (105.75845ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-txvh
STEP: Creating a pod to test exec-volume-test
Jun 17 04:47:01.786: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-txvh" in namespace "volume-372" to be "Succeeded or Failed"
Jun 17 04:47:01.892: INFO: Pod "exec-volume-test-preprovisionedpv-txvh": Phase="Pending", Reason="", readiness=false. Elapsed: 106.590179ms
Jun 17 04:47:04.000: INFO: Pod "exec-volume-test-preprovisionedpv-txvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213992369s
Jun 17 04:47:06.106: INFO: Pod "exec-volume-test-preprovisionedpv-txvh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320469511s
Jun 17 04:47:08.213: INFO: Pod "exec-volume-test-preprovisionedpv-txvh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427277862s
Jun 17 04:47:10.320: INFO: Pod "exec-volume-test-preprovisionedpv-txvh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.534339766s
STEP: Saw pod success
Jun 17 04:47:10.320: INFO: Pod "exec-volume-test-preprovisionedpv-txvh" satisfied condition "Succeeded or Failed"
Jun 17 04:47:10.426: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-txvh container exec-container-preprovisionedpv-txvh: <nil>
STEP: delete the pod
Jun 17 04:47:10.659: INFO: Waiting for pod exec-volume-test-preprovisionedpv-txvh to disappear
Jun 17 04:47:10.774: INFO: Pod exec-volume-test-preprovisionedpv-txvh no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-txvh
Jun 17 04:47:10.774: INFO: Deleting pod "exec-volume-test-preprovisionedpv-txvh" in namespace "volume-372"
... skipping 38 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 17 04:47:02.999: INFO: Waiting up to 5m0s for pod "pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18" in namespace "emptydir-3060" to be "Succeeded or Failed"
Jun 17 04:47:03.104: INFO: Pod "pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18": Phase="Pending", Reason="", readiness=false. Elapsed: 104.125681ms
Jun 17 04:47:05.214: INFO: Pod "pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214808841s
Jun 17 04:47:07.321: INFO: Pod "pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321227888s
Jun 17 04:47:09.426: INFO: Pod "pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426648512s
Jun 17 04:47:11.531: INFO: Pod "pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531638841s
Jun 17 04:47:13.636: INFO: Pod "pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.636215489s
STEP: Saw pod success
Jun 17 04:47:13.636: INFO: Pod "pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18" satisfied condition "Succeeded or Failed"
Jun 17 04:47:13.754: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18 container test-container: <nil>
STEP: delete the pod
Jun 17 04:47:13.971: INFO: Waiting for pod pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18 to disappear
Jun 17 04:47:14.075: INFO: Pod pod-bc309ef0-355b-46d5-964f-ea5fb1ed6d18 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.128 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:14.299: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 57 lines ...
Driver emptydir doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] Server request timeout
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:47:13.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename request-timeout
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:14.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-8838" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":5,"skipped":21,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:14.688: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 204 lines ...
Jun 17 04:47:00.510: INFO: PersistentVolumeClaim pvc-sg558 found but phase is Pending instead of Bound.
Jun 17 04:47:02.617: INFO: PersistentVolumeClaim pvc-sg558 found and phase=Bound (4.320983129s)
Jun 17 04:47:02.617: INFO: Waiting up to 3m0s for PersistentVolume local-t9brl to have phase Bound
Jun 17 04:47:02.724: INFO: PersistentVolume local-t9brl found and phase=Bound (106.733026ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-jlw4
STEP: Creating a pod to test exec-volume-test
Jun 17 04:47:03.045: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-jlw4" in namespace "volume-5918" to be "Succeeded or Failed"
Jun 17 04:47:03.152: INFO: Pod "exec-volume-test-preprovisionedpv-jlw4": Phase="Pending", Reason="", readiness=false. Elapsed: 107.460654ms
Jun 17 04:47:05.259: INFO: Pod "exec-volume-test-preprovisionedpv-jlw4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214384626s
Jun 17 04:47:07.370: INFO: Pod "exec-volume-test-preprovisionedpv-jlw4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324811005s
Jun 17 04:47:09.477: INFO: Pod "exec-volume-test-preprovisionedpv-jlw4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432433638s
Jun 17 04:47:11.585: INFO: Pod "exec-volume-test-preprovisionedpv-jlw4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.540522951s
STEP: Saw pod success
Jun 17 04:47:11.586: INFO: Pod "exec-volume-test-preprovisionedpv-jlw4" satisfied condition "Succeeded or Failed"
Jun 17 04:47:11.695: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-jlw4 container exec-container-preprovisionedpv-jlw4: <nil>
STEP: delete the pod
Jun 17 04:47:11.921: INFO: Waiting for pod exec-volume-test-preprovisionedpv-jlw4 to disappear
Jun 17 04:47:12.027: INFO: Pod exec-volume-test-preprovisionedpv-jlw4 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-jlw4
Jun 17 04:47:12.027: INFO: Deleting pod "exec-volume-test-preprovisionedpv-jlw4" in namespace "volume-5918"
... skipping 28 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":11,"failed":0}
S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 75 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jun 17 04:47:06.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5" in namespace "downward-api-7721" to be "Succeeded or Failed"
Jun 17 04:47:06.401: INFO: Pod "downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 105.540767ms
Jun 17 04:47:08.508: INFO: Pod "downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212722178s
Jun 17 04:47:10.615: INFO: Pod "downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319514009s
Jun 17 04:47:12.722: INFO: Pod "downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426716087s
Jun 17 04:47:14.828: INFO: Pod "downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.532600092s
STEP: Saw pod success
Jun 17 04:47:14.828: INFO: Pod "downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5" satisfied condition "Succeeded or Failed"
Jun 17 04:47:14.934: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5 container client-container: <nil>
STEP: delete the pod
Jun 17 04:47:15.155: INFO: Waiting for pod downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5 to disappear
Jun 17 04:47:15.260: INFO: Pod downwardapi-volume-d5f58ef4-be75-43c1-821a-65e518a71dc5 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.033 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:15.486: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 11 lines ...
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:46:23.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 47 lines ...
Jun 17 04:46:59.529: INFO: PersistentVolumeClaim pvc-hnx6s found but phase is Pending instead of Bound.
Jun 17 04:47:01.639: INFO: PersistentVolumeClaim pvc-hnx6s found and phase=Bound (14.872354795s)
Jun 17 04:47:01.640: INFO: Waiting up to 3m0s for PersistentVolume local-lvcws to have phase Bound
Jun 17 04:47:01.750: INFO: PersistentVolume local-lvcws found and phase=Bound (110.342609ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7cvr
STEP: Creating a pod to test subpath
Jun 17 04:47:02.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7cvr" in namespace "provisioning-6669" to be "Succeeded or Failed"
Jun 17 04:47:02.184: INFO: Pod "pod-subpath-test-preprovisionedpv-7cvr": Phase="Pending", Reason="", readiness=false. Elapsed: 106.8146ms
Jun 17 04:47:04.292: INFO: Pod "pod-subpath-test-preprovisionedpv-7cvr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214409634s
Jun 17 04:47:06.399: INFO: Pod "pod-subpath-test-preprovisionedpv-7cvr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321859371s
Jun 17 04:47:08.507: INFO: Pod "pod-subpath-test-preprovisionedpv-7cvr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429466806s
Jun 17 04:47:10.615: INFO: Pod "pod-subpath-test-preprovisionedpv-7cvr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537472345s
Jun 17 04:47:12.723: INFO: Pod "pod-subpath-test-preprovisionedpv-7cvr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.645334715s
Jun 17 04:47:14.833: INFO: Pod "pod-subpath-test-preprovisionedpv-7cvr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.754953414s
STEP: Saw pod success
Jun 17 04:47:14.833: INFO: Pod "pod-subpath-test-preprovisionedpv-7cvr" satisfied condition "Succeeded or Failed"
Jun 17 04:47:14.940: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-7cvr container test-container-subpath-preprovisionedpv-7cvr: <nil>
STEP: delete the pod
Jun 17 04:47:15.162: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7cvr to disappear
Jun 17 04:47:15.269: INFO: Pod pod-subpath-test-preprovisionedpv-7cvr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7cvr
Jun 17 04:47:15.270: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7cvr" in namespace "provisioning-6669"
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":15,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:16.847: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 35 lines ...
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:46:47.145: INFO: >>> kubeConfig: /root/.kube/config
... skipping 7 lines ...
Jun 17 04:46:47.896: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi}
STEP: creating a StorageClass volume-expand-7052gsvfp
STEP: creating a claim
Jun 17 04:46:48.002: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jun 17 04:46:48.219: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
Jun 17 04:46:48.429: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:46:50.641: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:46:52.642: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:46:54.649: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:46:56.641: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:46:58.640: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:00.640: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:02.639: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:04.641: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:06.640: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:08.640: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:10.641: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:12.639: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:14.639: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:16.644: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:18.641: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-7052gsvfp",
... // 3 identical fields
}
Jun 17 04:47:18.852: INFO: Error updating pvc awsz6cx7: PersistentVolumeClaim "awsz6cx7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 24 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should not allow expansion of pvcs without AllowVolumeExpansion property
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":5,"skipped":47,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:19.405: INFO: Only supported for providers [vsphere] (not aws)
... skipping 179 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:19.895: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 121 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSIStorageCapacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1336
CSIStorageCapacity used, have capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1379
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":4,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:23.975: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 79 lines ...
• [SLOW TEST:41.322 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":8,"skipped":43,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:24.986: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 90 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
When pod refers to non-existent ephemeral storage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
should allow deletion of pod with invalid volume : secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:25.500: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 11 lines ...
Only supported for providers [gce gke] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-apps] Job should not create pods when created in suspend state","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:47:16.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 17 04:47:17.574: INFO: Waiting up to 5m0s for pod "security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719" in namespace "security-context-4971" to be "Succeeded or Failed"
Jun 17 04:47:17.680: INFO: Pod "security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719": Phase="Pending", Reason="", readiness=false. Elapsed: 106.185046ms
Jun 17 04:47:19.790: INFO: Pod "security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216161661s
Jun 17 04:47:21.948: INFO: Pod "security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374264703s
Jun 17 04:47:24.055: INFO: Pod "security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719": Phase="Pending", Reason="", readiness=false. Elapsed: 6.481731866s
Jun 17 04:47:26.163: INFO: Pod "security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.588913579s
STEP: Saw pod success
Jun 17 04:47:26.163: INFO: Pod "security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719" satisfied condition "Succeeded or Failed"
Jun 17 04:47:26.270: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719 container test-container: <nil>
STEP: delete the pod
Jun 17 04:47:26.500: INFO: Waiting for pod security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719 to disappear
Jun 17 04:47:26.606: INFO: Pod security-context-34ee5eb8-5ba4-4b19-944e-d4d745955719 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.110 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:26.843: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:27.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4213" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":5,"skipped":46,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:47:26.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail when exceeds active deadline
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:255
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:29.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-866" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":5,"skipped":26,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:30.178: INFO: Only supported for providers [gce gke] (not aws)
... skipping 43 lines ...
Jun 17 04:47:14.133: INFO: PersistentVolumeClaim pvc-scsz8 found but phase is Pending instead of Bound.
Jun 17 04:47:16.240: INFO: PersistentVolumeClaim pvc-scsz8 found and phase=Bound (6.424521934s)
Jun 17 04:47:16.240: INFO: Waiting up to 3m0s for PersistentVolume local-pkgmh to have phase Bound
Jun 17 04:47:16.345: INFO: PersistentVolume local-pkgmh found and phase=Bound (105.428087ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vqpl
STEP: Creating a pod to test subpath
Jun 17 04:47:16.670: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vqpl" in namespace "provisioning-1986" to be "Succeeded or Failed"
Jun 17 04:47:16.776: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl": Phase="Pending", Reason="", readiness=false. Elapsed: 105.402887ms
Jun 17 04:47:18.882: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211827611s
Jun 17 04:47:20.988: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318210136s
Jun 17 04:47:23.095: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42431544s
STEP: Saw pod success
Jun 17 04:47:23.095: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl" satisfied condition "Succeeded or Failed"
Jun 17 04:47:23.200: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-vqpl container test-container-subpath-preprovisionedpv-vqpl: <nil>
STEP: delete the pod
Jun 17 04:47:23.421: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vqpl to disappear
Jun 17 04:47:23.527: INFO: Pod pod-subpath-test-preprovisionedpv-vqpl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vqpl
Jun 17 04:47:23.527: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vqpl" in namespace "provisioning-1986"
STEP: Creating pod pod-subpath-test-preprovisionedpv-vqpl
STEP: Creating a pod to test subpath
Jun 17 04:47:23.739: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vqpl" in namespace "provisioning-1986" to be "Succeeded or Failed"
Jun 17 04:47:23.845: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl": Phase="Pending", Reason="", readiness=false. Elapsed: 105.409824ms
Jun 17 04:47:25.951: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211684434s
Jun 17 04:47:28.056: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317094904s
Jun 17 04:47:30.163: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.424265535s
STEP: Saw pod success
Jun 17 04:47:30.163: INFO: Pod "pod-subpath-test-preprovisionedpv-vqpl" satisfied condition "Succeeded or Failed"
Jun 17 04:47:30.269: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-vqpl container test-container-subpath-preprovisionedpv-vqpl: <nil>
STEP: delete the pod
Jun 17 04:47:30.506: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vqpl to disappear
Jun 17 04:47:30.613: INFO: Pod pod-subpath-test-preprovisionedpv-vqpl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vqpl
Jun 17 04:47:30.613: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vqpl" in namespace "provisioning-1986"
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:32.118: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
Delete Grace Period
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:55
should be submitted and removed
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:66
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:36.891: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 66 lines ...
• [SLOW TEST:7.821 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
should be sent by kubelets and the scheduler about pods scheduling and running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/events.go:39
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running ","total":-1,"completed":6,"skipped":39,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 155 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:38.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":7,"skipped":40,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:38.375: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 95 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should create read/write inline ephemeral volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:39.594: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 58 lines ...
STEP: Destroying namespace "services-1459" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":5,"skipped":36,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 27 lines ...
Jun 17 04:47:28.960: INFO: PersistentVolumeClaim pvc-cvvqj found but phase is Pending instead of Bound.
Jun 17 04:47:31.071: INFO: PersistentVolumeClaim pvc-cvvqj found and phase=Bound (10.644487589s)
Jun 17 04:47:31.071: INFO: Waiting up to 3m0s for PersistentVolume local-sm47h to have phase Bound
Jun 17 04:47:31.176: INFO: PersistentVolume local-sm47h found and phase=Bound (105.603348ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-txw7
STEP: Creating a pod to test subpath
Jun 17 04:47:31.495: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-txw7" in namespace "provisioning-7375" to be "Succeeded or Failed"
Jun 17 04:47:31.601: INFO: Pod "pod-subpath-test-preprovisionedpv-txw7": Phase="Pending", Reason="", readiness=false. Elapsed: 105.812263ms
Jun 17 04:47:33.708: INFO: Pod "pod-subpath-test-preprovisionedpv-txw7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21315608s
Jun 17 04:47:35.814: INFO: Pod "pod-subpath-test-preprovisionedpv-txw7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319222398s
Jun 17 04:47:37.922: INFO: Pod "pod-subpath-test-preprovisionedpv-txw7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.426466006s
STEP: Saw pod success
Jun 17 04:47:37.922: INFO: Pod "pod-subpath-test-preprovisionedpv-txw7" satisfied condition "Succeeded or Failed"
Jun 17 04:47:38.028: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-txw7 container test-container-subpath-preprovisionedpv-txw7: <nil>
STEP: delete the pod
Jun 17 04:47:38.248: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-txw7 to disappear
Jun 17 04:47:38.353: INFO: Pod pod-subpath-test-preprovisionedpv-txw7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-txw7
Jun 17 04:47:38.353: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-txw7" in namespace "provisioning-7375"
... skipping 30 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:41.270: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 136 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-downwardapi-xvqd
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 04:47:15.954: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xvqd" in namespace "subpath-1775" to be "Succeeded or Failed"
Jun 17 04:47:16.060: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Pending", Reason="", readiness=false. Elapsed: 106.573486ms
Jun 17 04:47:18.175: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221005262s
Jun 17 04:47:20.283: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329045356s
Jun 17 04:47:22.395: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440935615s
Jun 17 04:47:24.502: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Running", Reason="", readiness=true. Elapsed: 8.548439019s
Jun 17 04:47:26.609: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Running", Reason="", readiness=true. Elapsed: 10.655230785s
... skipping 3 lines ...
Jun 17 04:47:35.045: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Running", Reason="", readiness=true. Elapsed: 19.091627843s
Jun 17 04:47:37.154: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Running", Reason="", readiness=true. Elapsed: 21.200204537s
Jun 17 04:47:39.261: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Running", Reason="", readiness=true. Elapsed: 23.307367981s
Jun 17 04:47:41.369: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Running", Reason="", readiness=true. Elapsed: 25.415035955s
Jun 17 04:47:43.476: INFO: Pod "pod-subpath-test-downwardapi-xvqd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.522229816s
STEP: Saw pod success
Jun 17 04:47:43.476: INFO: Pod "pod-subpath-test-downwardapi-xvqd" satisfied condition "Succeeded or Failed"
Jun 17 04:47:43.583: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-subpath-test-downwardapi-xvqd container test-container-subpath-downwardapi-xvqd: <nil>
STEP: delete the pod
Jun 17 04:47:43.802: INFO: Waiting for pod pod-subpath-test-downwardapi-xvqd to disappear
Jun 17 04:47:43.909: INFO: Pod pod-subpath-test-downwardapi-xvqd no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-xvqd
Jun 17 04:47:43.909: INFO: Deleting pod "pod-subpath-test-downwardapi-xvqd" in namespace "subpath-1775"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:44.259: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 43 lines ...
Jun 17 04:47:14.445: INFO: PersistentVolumeClaim pvc-fqj6t found but phase is Pending instead of Bound.
Jun 17 04:47:16.551: INFO: PersistentVolumeClaim pvc-fqj6t found and phase=Bound (6.423882311s)
Jun 17 04:47:16.551: INFO: Waiting up to 3m0s for PersistentVolume local-tkqf2 to have phase Bound
Jun 17 04:47:16.657: INFO: PersistentVolume local-tkqf2 found and phase=Bound (105.245336ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n6td
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 04:47:16.974: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n6td" in namespace "provisioning-1150" to be "Succeeded or Failed"
Jun 17 04:47:17.080: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Pending", Reason="", readiness=false. Elapsed: 105.454223ms
Jun 17 04:47:19.186: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211744492s
Jun 17 04:47:21.292: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Running", Reason="", readiness=true. Elapsed: 4.317852546s
Jun 17 04:47:23.399: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Running", Reason="", readiness=true. Elapsed: 6.424070882s
Jun 17 04:47:25.505: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Running", Reason="", readiness=true. Elapsed: 8.53055354s
Jun 17 04:47:27.612: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Running", Reason="", readiness=true. Elapsed: 10.637440989s
... skipping 2 lines ...
Jun 17 04:47:33.943: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Running", Reason="", readiness=true. Elapsed: 16.968719041s
Jun 17 04:47:36.049: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Running", Reason="", readiness=true. Elapsed: 19.074857552s
Jun 17 04:47:38.155: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Running", Reason="", readiness=true. Elapsed: 21.180653295s
Jun 17 04:47:40.262: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Running", Reason="", readiness=false. Elapsed: 23.287264967s
Jun 17 04:47:42.369: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.394041755s
STEP: Saw pod success
Jun 17 04:47:42.369: INFO: Pod "pod-subpath-test-preprovisionedpv-n6td" satisfied condition "Succeeded or Failed"
Jun 17 04:47:42.474: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-n6td container test-container-subpath-preprovisionedpv-n6td: <nil>
STEP: delete the pod
Jun 17 04:47:42.693: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n6td to disappear
Jun 17 04:47:42.798: INFO: Pod pod-subpath-test-preprovisionedpv-n6td no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n6td
Jun 17 04:47:42.798: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n6td" in namespace "provisioning-1150"
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":34,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:44.314: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 96 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
running a failing command
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:519
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":9,"skipped":64,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 92 lines ...
Jun 17 04:47:37.870: INFO: Waiting for pod aws-client to disappear
Jun 17 04:47:37.976: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jun 17 04:47:37.976: INFO: Deleting PersistentVolumeClaim "pvc-zg99d"
Jun 17 04:47:38.082: INFO: Deleting PersistentVolume "aws-vhxfl"
Jun 17 04:47:38.761: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0908f8d6bbf5549f2", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0908f8d6bbf5549f2 is currently attached to i-00cb91d9735ab5447
status code: 400, request id: 79d75d7d-5329-4c70-a247-1b4026892e1e
Jun 17 04:47:44.351: INFO: Successfully deleted PD "aws://eu-west-1a/vol-0908f8d6bbf5549f2".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:44.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7210" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":4,"skipped":48,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:25.940 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to change the type from NodePort to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:32.570 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Verify that PVC in active use by a pod is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":6,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:47.439: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:47.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4347" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":4,"skipped":18,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:47.828: INFO: Only supported for providers [azure] (not aws)
... skipping 24 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jun 17 04:47:45.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd0ef417-182d-4ff6-8d17-652c7a8bdad9" in namespace "downward-api-5467" to be "Succeeded or Failed"
Jun 17 04:47:45.619: INFO: Pod "downwardapi-volume-fd0ef417-182d-4ff6-8d17-652c7a8bdad9": Phase="Pending", Reason="", readiness=false. Elapsed: 187.677828ms
Jun 17 04:47:47.727: INFO: Pod "downwardapi-volume-fd0ef417-182d-4ff6-8d17-652c7a8bdad9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295220527s
Jun 17 04:47:49.832: INFO: Pod "downwardapi-volume-fd0ef417-182d-4ff6-8d17-652c7a8bdad9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.400053853s
STEP: Saw pod success
Jun 17 04:47:49.832: INFO: Pod "downwardapi-volume-fd0ef417-182d-4ff6-8d17-652c7a8bdad9" satisfied condition "Succeeded or Failed"
Jun 17 04:47:49.936: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod downwardapi-volume-fd0ef417-182d-4ff6-8d17-652c7a8bdad9 container client-container: <nil>
STEP: delete the pod
Jun 17 04:47:50.155: INFO: Waiting for pod downwardapi-volume-fd0ef417-182d-4ff6-8d17-652c7a8bdad9 to disappear
Jun 17 04:47:50.261: INFO: Pod downwardapi-volume-fd0ef417-182d-4ff6-8d17-652c7a8bdad9 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.889 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
... skipping 94 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m
Two pods mounting a local volume at the same time
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m
should be able to write from pod1 and read from pod2
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":93,"failed":0}
[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
... skipping 35 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474[0m
that expects NO client request
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484[0m
should support a client that connects, sends DATA, and disconnects
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":6,"skipped":39,"failed":0}
[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
... skipping 29 lines ...
[32m• [SLOW TEST:8.179 seconds][0m
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m
listing validating webhooks should work [Conformance]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":7,"skipped":62,"failed":0}
[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:55.765: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 71 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m
With a server listening on localhost
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474[0m
should support forwarding over websockets
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":10,"skipped":66,"failed":0}
[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:56.253: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 295 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m
CSI FSGroupPolicy [LinuxOnly]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1638[0m
should modify fsGroup if fsGroupPolicy=default
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1662[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":3,"skipped":10,"failed":0}
[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 48 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m
[Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m
Verify if offline PVC expansion works
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":4,"skipped":17,"failed":0}
[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:58.299: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
... skipping 116 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:47:59.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[1mSTEP[0m: Destroying namespace "kubectl-6326" for this suite.
[32m•[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":11,"skipped":73,"failed":0}
[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:47:59.512: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 287 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m
[Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m
should support two pods which have the same volume definition
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:214[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":5,"skipped":50,"failed":0}
[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
... skipping 45 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m
Kubectl apply
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:816[0m
apply set/view last-applied
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:851[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":7,"skipped":42,"failed":0}
[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:00.809: INFO: Only supported for providers [gce gke] (not aws)
... skipping 173 lines ...
[32m• [SLOW TEST:60.969 seconds][0m
[sig-api-machinery] Garbage collector
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m
should delete jobs and pods created by cronjob
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1143[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":5,"skipped":56,"failed":0}
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 126 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m
[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m
should create read/write inline ephemeral volume
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":5,"skipped":31,"failed":0}
[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:47:58.245: INFO: >>> kubeConfig: /root/.kube/config
[1mSTEP[0m: Building a namespace api object, basename security-context
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
[1mSTEP[0m: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 17 04:47:59.094: INFO: Waiting up to 5m0s for pod "security-context-cbf22b1e-2c10-4268-896c-8f2139e27f9f" in namespace "security-context-7650" to be "Succeeded or Failed"
Jun 17 04:47:59.200: INFO: Pod "security-context-cbf22b1e-2c10-4268-896c-8f2139e27f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 105.641236ms
Jun 17 04:48:01.308: INFO: Pod "security-context-cbf22b1e-2c10-4268-896c-8f2139e27f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213858938s
Jun 17 04:48:03.416: INFO: Pod "security-context-cbf22b1e-2c10-4268-896c-8f2139e27f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321333902s
Jun 17 04:48:05.523: INFO: Pod "security-context-cbf22b1e-2c10-4268-896c-8f2139e27f9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42838011s
[1mSTEP[0m: Saw pod success
Jun 17 04:48:05.523: INFO: Pod "security-context-cbf22b1e-2c10-4268-896c-8f2139e27f9f" satisfied condition "Succeeded or Failed"
Jun 17 04:48:05.628: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod security-context-cbf22b1e-2c10-4268-896c-8f2139e27f9f container test-container: <nil>
[1mSTEP[0m: delete the pod
Jun 17 04:48:05.855: INFO: Waiting for pod security-context-cbf22b1e-2c10-4268-896c-8f2139e27f9f to disappear
Jun 17 04:48:05.960: INFO: Pod security-context-cbf22b1e-2c10-4268-896c-8f2139e27f9f no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
[32m• [SLOW TEST:7.928 seconds][0m
[sig-node] Security Context
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m
should support seccomp unconfined on the container [LinuxOnly]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:06.186: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 64 lines ...
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
[1mSTEP[0m: Creating configMap with name configmap-test-volume-b2e17bbe-3e5c-4e1f-b4bf-7eea9b8a3a66
[1mSTEP[0m: Creating a pod to test consume configMaps
Jun 17 04:47:56.767: INFO: Waiting up to 5m0s for pod "pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9" in namespace "configmap-5823" to be "Succeeded or Failed"
Jun 17 04:47:56.873: INFO: Pod "pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9": Phase="Pending", Reason="", readiness=false. Elapsed: 105.829083ms
Jun 17 04:47:58.980: INFO: Pod "pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212832836s
Jun 17 04:48:01.086: INFO: Pod "pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319386702s
Jun 17 04:48:03.195: INFO: Pod "pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427965874s
Jun 17 04:48:05.301: INFO: Pod "pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534389215s
Jun 17 04:48:07.407: INFO: Pod "pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.640695669s
[1mSTEP[0m: Saw pod success
Jun 17 04:48:07.408: INFO: Pod "pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9" satisfied condition "Succeeded or Failed"
Jun 17 04:48:07.513: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9 container agnhost-container: <nil>
[1mSTEP[0m: delete the pod
Jun 17 04:48:07.733: INFO: Waiting for pod pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9 to disappear
Jun 17 04:48:07.838: INFO: Pod pod-configmaps-4e307182-2fb0-491e-bc80-5e71e25879f9 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
[32m• [SLOW TEST:12.248 seconds][0m
[sig-storage] ConfigMap
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m
should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":71,"failed":0}
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:47:57.121: INFO: >>> kubeConfig: /root/.kube/config
[1mSTEP[0m: Building a namespace api object, basename downward-api
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:110
[1mSTEP[0m: Creating a pod to test downward api env vars
Jun 17 04:47:57.976: INFO: Waiting up to 5m0s for pod "downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405" in namespace "downward-api-4293" to be "Succeeded or Failed"
Jun 17 04:47:58.081: INFO: Pod "downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405": Phase="Pending", Reason="", readiness=false. Elapsed: 105.05228ms
Jun 17 04:48:00.209: INFO: Pod "downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232771352s
Jun 17 04:48:02.315: INFO: Pod "downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339008931s
Jun 17 04:48:04.421: INFO: Pod "downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445269842s
Jun 17 04:48:06.530: INFO: Pod "downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554451724s
Jun 17 04:48:08.637: INFO: Pod "downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.660667167s
[1mSTEP[0m: Saw pod success
Jun 17 04:48:08.637: INFO: Pod "downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405" satisfied condition "Succeeded or Failed"
Jun 17 04:48:08.746: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405 container dapi-container: <nil>
[1mSTEP[0m: delete the pod
Jun 17 04:48:08.962: INFO: Waiting for pod downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405 to disappear
Jun 17 04:48:09.067: INFO: Pod downward-api-e43d9dce-8132-4a2b-8b58-88cee9502405 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
[32m• [SLOW TEST:12.174 seconds][0m
[sig-node] Downward API
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m
should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:110[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:09.307: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 11 lines ...
Only supported for providers [vsphere] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:47:38.345: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Jun 17 04:47:44.003: INFO: PersistentVolumeClaim pvc-28lt9 found but phase is Pending instead of Bound.
Jun 17 04:47:46.112: INFO: PersistentVolumeClaim pvc-28lt9 found and phase=Bound (6.431577824s)
Jun 17 04:47:46.112: INFO: Waiting up to 3m0s for PersistentVolume aws-vw548 to have phase Bound
Jun 17 04:47:46.218: INFO: PersistentVolume aws-vw548 found and phase=Bound (106.650593ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-46z9
STEP: Creating a pod to test exec-volume-test
Jun 17 04:47:46.542: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-46z9" in namespace "volume-2007" to be "Succeeded or Failed"
Jun 17 04:47:46.649: INFO: Pod "exec-volume-test-preprovisionedpv-46z9": Phase="Pending", Reason="", readiness=false. Elapsed: 106.717796ms
Jun 17 04:47:48.767: INFO: Pod "exec-volume-test-preprovisionedpv-46z9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224166756s
Jun 17 04:47:50.875: INFO: Pod "exec-volume-test-preprovisionedpv-46z9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331988224s
Jun 17 04:47:52.983: INFO: Pod "exec-volume-test-preprovisionedpv-46z9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440461384s
Jun 17 04:47:55.091: INFO: Pod "exec-volume-test-preprovisionedpv-46z9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548398875s
Jun 17 04:47:57.200: INFO: Pod "exec-volume-test-preprovisionedpv-46z9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.657641822s
STEP: Saw pod success
Jun 17 04:47:57.200: INFO: Pod "exec-volume-test-preprovisionedpv-46z9" satisfied condition "Succeeded or Failed"
Jun 17 04:47:57.308: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-46z9 container exec-container-preprovisionedpv-46z9: <nil>
STEP: delete the pod
Jun 17 04:47:57.533: INFO: Waiting for pod exec-volume-test-preprovisionedpv-46z9 to disappear
Jun 17 04:47:57.642: INFO: Pod exec-volume-test-preprovisionedpv-46z9 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-46z9
Jun 17 04:47:57.642: INFO: Deleting pod "exec-volume-test-preprovisionedpv-46z9" in namespace "volume-2007"
STEP: Deleting pv and pvc
Jun 17 04:47:57.749: INFO: Deleting PersistentVolumeClaim "pvc-28lt9"
Jun 17 04:47:57.856: INFO: Deleting PersistentVolume "aws-vw548"
Jun 17 04:47:58.153: INFO: Couldn't delete PD "aws://eu-west-1a/vol-09b5b024a54b6cd1e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09b5b024a54b6cd1e is currently attached to i-00cb91d9735ab5447
status code: 400, request id: 277ca5d9-5615-4005-82f8-538ac66f6a00
Jun 17 04:48:03.744: INFO: Couldn't delete PD "aws://eu-west-1a/vol-09b5b024a54b6cd1e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09b5b024a54b6cd1e is currently attached to i-00cb91d9735ab5447
status code: 400, request id: fdb55932-122a-4b86-8aea-701d86cd7b69
Jun 17 04:48:09.414: INFO: Successfully deleted PD "aws://eu-west-1a/vol-09b5b024a54b6cd1e".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:48:09.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2007" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":21,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:48:02.197: INFO: >>> kubeConfig: /root/.kube/config
[1mSTEP[0m: Building a namespace api object, basename security-context
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[1mSTEP[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 17 04:48:03.046: INFO: Waiting up to 5m0s for pod "security-context-1573d385-34db-4403-a045-630f113211e6" in namespace "security-context-6667" to be "Succeeded or Failed"
Jun 17 04:48:03.152: INFO: Pod "security-context-1573d385-34db-4403-a045-630f113211e6": Phase="Pending", Reason="", readiness=false. Elapsed: 105.782419ms
Jun 17 04:48:05.259: INFO: Pod "security-context-1573d385-34db-4403-a045-630f113211e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212668867s
Jun 17 04:48:07.366: INFO: Pod "security-context-1573d385-34db-4403-a045-630f113211e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319429704s
Jun 17 04:48:09.476: INFO: Pod "security-context-1573d385-34db-4403-a045-630f113211e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.430164801s
[1mSTEP[0m: Saw pod success
Jun 17 04:48:09.476: INFO: Pod "security-context-1573d385-34db-4403-a045-630f113211e6" satisfied condition "Succeeded or Failed"
Jun 17 04:48:09.582: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod security-context-1573d385-34db-4403-a045-630f113211e6 container test-container: <nil>
[1mSTEP[0m: delete the pod
Jun 17 04:48:09.802: INFO: Waiting for pod security-context-1573d385-34db-4403-a045-630f113211e6 to disappear
Jun 17 04:48:09.908: INFO: Pod security-context-1573d385-34db-4403-a045-630f113211e6 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
[32m• [SLOW TEST:7.925 seconds][0m
[sig-node] Security Context
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m
should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":61,"failed":0}
[36mS[0m[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 23 lines ...
Jun 17 04:48:00.348: INFO: PersistentVolumeClaim pvc-mvjhm found but phase is Pending instead of Bound.
Jun 17 04:48:02.454: INFO: PersistentVolumeClaim pvc-mvjhm found and phase=Bound (12.740664268s)
Jun 17 04:48:02.454: INFO: Waiting up to 3m0s for PersistentVolume local-rkgvb to have phase Bound
Jun 17 04:48:02.563: INFO: PersistentVolume local-rkgvb found and phase=Bound (108.652137ms)
[1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-qtjq
[1mSTEP[0m: Creating a pod to test subpath
Jun 17 04:48:02.879: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qtjq" in namespace "provisioning-3652" to be "Succeeded or Failed"
Jun 17 04:48:02.984: INFO: Pod "pod-subpath-test-preprovisionedpv-qtjq": Phase="Pending", Reason="", readiness=false. Elapsed: 104.773119ms
Jun 17 04:48:05.090: INFO: Pod "pod-subpath-test-preprovisionedpv-qtjq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211071572s
Jun 17 04:48:07.195: INFO: Pod "pod-subpath-test-preprovisionedpv-qtjq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316387137s
Jun 17 04:48:09.302: INFO: Pod "pod-subpath-test-preprovisionedpv-qtjq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.422643568s
[1mSTEP[0m: Saw pod success
Jun 17 04:48:09.302: INFO: Pod "pod-subpath-test-preprovisionedpv-qtjq" satisfied condition "Succeeded or Failed"
Jun 17 04:48:09.406: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-qtjq container test-container-subpath-preprovisionedpv-qtjq: <nil>
[1mSTEP[0m: delete the pod
Jun 17 04:48:09.642: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qtjq to disappear
Jun 17 04:48:09.747: INFO: Pod pod-subpath-test-preprovisionedpv-qtjq no longer exists
[1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-qtjq
Jun 17 04:48:09.747: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qtjq" in namespace "provisioning-3652"
... skipping 61 lines ...
[32m• [SLOW TEST:11.987 seconds][0m
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m
listing mutating webhooks should work [Conformance]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":8,"skipped":56,"failed":0}
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:12.935: INFO: Only supported for providers [azure] (not aws)
... skipping 35 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Driver emptydir doesn't support PreprovisionedPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":10,"skipped":96,"failed":0}
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:47:57.679: INFO: >>> kubeConfig: /root/.kube/config
[1mSTEP[0m: Building a namespace api object, basename dns
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
[32m• [SLOW TEST:15.610 seconds][0m
[sig-network] DNS
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m
should support configurable pod resolv.conf
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":11,"skipped":96,"failed":0}
[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
... skipping 85 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m
One pod requesting one prebound PVC
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m
should be able to mount volume and read from pod1
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":99,"failed":0}
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":59,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:48:11.310: INFO: >>> kubeConfig: /root/.kube/config
[1mSTEP[0m: Building a namespace api object, basename gc
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:48:13.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[1mSTEP[0m: Destroying namespace "gc-6905" for this suite.
[32m•[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}
[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
... skipping 10 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:48:14.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[1mSTEP[0m: Destroying namespace "configmap-2280" for this suite.
[32m•[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":9,"skipped":65,"failed":0}
[BeforeEach] [sig-node] PodTemplates
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:48:14.277: INFO: >>> kubeConfig: /root/.kube/config
[1mSTEP[0m: Building a namespace api object, basename podtemplate
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:48:15.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[1mSTEP[0m: Destroying namespace "podtemplate-2981" for this suite.
[32m•[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":10,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:48:15.964: INFO: >>> kubeConfig: /root/.kube/config
... skipping 50 lines ...
[32m• [SLOW TEST:29.828 seconds][0m
[sig-api-machinery] ResourceQuota
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m
should create a ResourceQuota and capture the life of a configMap. [Conformance]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}
[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
... skipping 3 lines ...
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[1mSTEP[0m: Creating a pod to test downward API volume plugin
Jun 17 04:48:14.151: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec1cd685-d83b-4b88-97d3-788394ddfc98" in namespace "projected-3763" to be "Succeeded or Failed"
Jun 17 04:48:14.255: INFO: Pod "downwardapi-volume-ec1cd685-d83b-4b88-97d3-788394ddfc98": Phase="Pending", Reason="", readiness=false. Elapsed: 104.033744ms
Jun 17 04:48:16.360: INFO: Pod "downwardapi-volume-ec1cd685-d83b-4b88-97d3-788394ddfc98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209690985s
Jun 17 04:48:18.466: INFO: Pod "downwardapi-volume-ec1cd685-d83b-4b88-97d3-788394ddfc98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31524083s
Jun 17 04:48:20.587: INFO: Pod "downwardapi-volume-ec1cd685-d83b-4b88-97d3-788394ddfc98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.436702279s
[1mSTEP[0m: Saw pod success
Jun 17 04:48:20.587: INFO: Pod "downwardapi-volume-ec1cd685-d83b-4b88-97d3-788394ddfc98" satisfied condition "Succeeded or Failed"
Jun 17 04:48:20.693: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod downwardapi-volume-ec1cd685-d83b-4b88-97d3-788394ddfc98 container client-container: <nil>
[1mSTEP[0m: delete the pod
Jun 17 04:48:20.911: INFO: Waiting for pod downwardapi-volume-ec1cd685-d83b-4b88-97d3-788394ddfc98 to disappear
Jun 17 04:48:21.019: INFO: Pod downwardapi-volume-ec1cd685-d83b-4b88-97d3-788394ddfc98 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
[32m• [SLOW TEST:7.917 seconds][0m
[sig-storage] Projected downwardAPI
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m
should provide container's memory limit [NodeConformance] [Conformance]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":98,"failed":0}
[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:21.263: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 71 lines ...
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[1mSTEP[0m: Creating projection with secret that has name projected-secret-test-map-59971639-9b99-4cdf-92ff-cd2798e0c631
[1mSTEP[0m: Creating a pod to test consume secrets
Jun 17 04:48:17.948: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-28339473-86f2-445d-9e15-8d00cf0a8054" in namespace "projected-5724" to be "Succeeded or Failed"
Jun 17 04:48:18.055: INFO: Pod "pod-projected-secrets-28339473-86f2-445d-9e15-8d00cf0a8054": Phase="Pending", Reason="", readiness=false. Elapsed: 107.319399ms
Jun 17 04:48:20.164: INFO: Pod "pod-projected-secrets-28339473-86f2-445d-9e15-8d00cf0a8054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21614035s
Jun 17 04:48:22.272: INFO: Pod "pod-projected-secrets-28339473-86f2-445d-9e15-8d00cf0a8054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.324227377s
[1mSTEP[0m: Saw pod success
Jun 17 04:48:22.272: INFO: Pod "pod-projected-secrets-28339473-86f2-445d-9e15-8d00cf0a8054" satisfied condition "Succeeded or Failed"
Jun 17 04:48:22.380: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod pod-projected-secrets-28339473-86f2-445d-9e15-8d00cf0a8054 container projected-secret-volume-test: <nil>
[1mSTEP[0m: delete the pod
Jun 17 04:48:22.607: INFO: Waiting for pod pod-projected-secrets-28339473-86f2-445d-9e15-8d00cf0a8054 to disappear
Jun 17 04:48:22.714: INFO: Pod pod-projected-secrets-28339473-86f2-445d-9e15-8d00cf0a8054 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
[32m• [SLOW TEST:5.959 seconds][0m
[sig-storage] Projected secret
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":69,"failed":0}
[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:22.952: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 106 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452[0m
that expects NO client request
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462[0m
should support a client that connects, sends DATA, and disconnects
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":7,"skipped":65,"failed":0}
[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-windows] Hybrid cluster network
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jun 17 04:48:26.252: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 109 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23[0m
Granular Checks: Pods
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30[0m
should function for intra-pod communication: udp [NodeConformance] [Conformance]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":50,"failed":0}
[36mS[0m[36mS[0m
[90m------------------------------[0m
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 66 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m
should not mount / map unused volumes in a pod [LinuxOnly]
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":47,"failed":0}
[36mS[0m
[90m------------------------------[0m
[BeforeEach] [sig-api-machinery] Generated clientset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
... skipping 13 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:48:31.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[1mSTEP[0m: Destroying namespace "clientset-6413" for this suite.
[32m•[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":6,"skipped":48,"failed":0}
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:47:59.968: INFO: >>> kubeConfig: /root/.kube/config
[1mSTEP[0m: Building a namespace api object, basename persistent-local-volumes-test
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
... skipping 77 lines ...
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m
Two pods mounting a local volume at the same time
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m
should be able to write from pod1 and read from pod2
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m
[90m------------------------------[0m
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:32.947: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 49 lines ...
Jun 17 04:48:29.510: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 17 04:48:29.510: INFO: Running '/logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5146 describe pod agnhost-primary-jhdlk'
Jun 17 04:48:30.122: INFO: stderr: ""
Jun 17 04:48:30.122: INFO: stdout: "Name: agnhost-primary-jhdlk\nNamespace: kubectl-5146\nPriority: 0\nNode: ip-172-20-50-49.eu-west-1.compute.internal/172.20.50.49\nStart Time: Fri, 17 Jun 2022 04:48:22 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 100.96.1.207\nIPs:\n IP: 100.96.1.207\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://313289ae7cbad60a2b5a2ab7081eb537cbc985147f716b3043e0886c8cb48e4e\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 17 Jun 2022 04:48:24 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ssvbn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-ssvbn:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-5146/agnhost-primary-jhdlk to ip-172-20-50-49.eu-west-1.compute.internal\n Normal Pulled 6s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 6s kubelet Created container agnhost-primary\n Normal Started 6s kubelet Started container agnhost-primary\n"
Jun 17 04:48:30.122: INFO: Running '/logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5146 describe rc agnhost-primary'
Jun 17 04:48:30.838: INFO: stderr: ""
Jun 17 04:48:30.838: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5146\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: agnhost-primary-jhdlk\n"
Jun 17 04:48:30.838: INFO: Running '/logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5146 describe service agnhost-primary'
Jun 17 04:48:31.549: INFO: stderr: ""
Jun 17 04:48:31.550: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5146\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.70.75.237\nIPs: 100.70.75.237\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.1.207:6379\nSession Affinity: None\nEvents: <none>\n"
Jun 17 04:48:31.655: INFO: Running '/logs/artifacts/12f3fd81-edf7-11ec-aa21-eaae59a12ce8/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5146 describe node ip-172-20-38-101.eu-west-1.compute.internal'
Jun 17 04:48:32.812: INFO: stderr: ""
Jun 17 04:48:32.812: INFO: stdout: "Name: ip-172-20-38-101.eu-west-1.compute.internal\nRoles: node\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=t3.medium\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=eu-west-1\n failure-domain.beta.kubernetes.io/zone=eu-west-1a\n kops.k8s.io/instancegroup=nodes-eu-west-1a\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-172-20-38-101.eu-west-1.compute.internal\n kubernetes.io/os=linux\n kubernetes.io/role=node\n node-role.kubernetes.io/node=\n node.kubernetes.io/instance-type=t3.medium\n topology.ebs.csi.aws.com/zone=eu-west-1a\n topology.hostpath.csi/node=ip-172-20-38-101.eu-west-1.compute.internal\n topology.kubernetes.io/region=eu-west-1\n topology.kubernetes.io/zone=eu-west-1a\nAnnotations: csi.volume.kubernetes.io/nodeid: {\"ebs.csi.aws.com\":\"i-00cb91d9735ab5447\"}\n io.cilium.network.ipv4-cilium-host: 100.96.4.73\n io.cilium.network.ipv4-health-ip: 100.96.4.239\n io.cilium.network.ipv4-pod-cidr: 100.96.4.0/24\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 17 Jun 2022 04:41:53 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: ip-172-20-38-101.eu-west-1.compute.internal\n AcquireTime: <unset>\n RenewTime: Fri, 17 Jun 2022 04:48:32 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 17 Jun 2022 04:42:23 +0000 Fri, 17 Jun 2022 04:42:23 +0000 CiliumIsUp Cilium is running on this node\n MemoryPressure False Fri, 17 Jun 2022 04:48:09 +0000 Fri, 17 Jun 2022 04:41:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 17 Jun 2022 04:48:09 +0000 Fri, 17 Jun 2022 04:41:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 17 Jun 2022 04:48:09 +0000 Fri, 17 Jun 2022 04:41:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 17 Jun 2022 04:48:09 +0000 Fri, 17 Jun 2022 04:42:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.20.38.101\n ExternalIP: 34.255.10.114\n Hostname: ip-172-20-38-101.eu-west-1.compute.internal\n InternalDNS: ip-172-20-38-101.eu-west-1.compute.internal\n ExternalDNS: ec2-34-255-10-114.eu-west-1.compute.amazonaws.com\nCapacity:\n cpu: 2\n ephemeral-storage: 50319340Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 3955476Ki\n pods: 110\nAllocatable:\n cpu: 2\n ephemeral-storage: 46374303668\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 3853076Ki\n pods: 110\nSystem Info:\n Machine ID: 2d99ef724dce45369047869bf2504a0b\n System UUID: ec2bdae3-6eb3-4b21-8753-9ee654bfad56\n Boot ID: bd5e0e78-d756-4a9a-a8bf-a857e14dbe86\n Kernel Version: 5.10.109-104.500.amzn2.x86_64\n OS Image: Amazon Linux 2\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.17\n Kubelet Version: v1.23.8\n Kube-Proxy Version: v1.23.8\nPodCIDR: 100.96.4.0/24\nPodCIDRs: 100.96.4.0/24\nProviderID: aws:///eu-west-1a/i-00cb91d9735ab5447\nNon-terminated Pods: (15 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n ephemeral-2282 inline-volume-tester-qz76x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 82s\n kube-system cilium-jgdmh 100m (5%) 0 (0%) 128Mi (3%) 100Mi (2%) 6m39s\n kube-system coredns-5556cb978d-tswwt 100m 
(5%) 0 (0%) 70Mi (1%) 170Mi (4%) 6m1s\n kube-system ebs-csi-node-sj72b 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m39s\n nettest-9388 netserver-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23s\n persistent-local-volumes-test-1807 hostexec-ip-172-20-38-101.eu-west-1.compute.internal-cdskj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32s\n persistent-local-volumes-test-1807 pod-1108a1be-f9c3-4ecc-8f1c-68b2c45bb785 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22s\n persistent-local-volumes-test-1807 pod-46269334-e2af-4a07-b601-eb4630dce3b3 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10s\n pod-network-test-6117 netserver-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41s\n pod-network-test-6117 test-container-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16s\n provisioning-4397 hostexec-ip-172-20-38-101.eu-west-1.compute.internal-tvlb8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31s\n provisioning-4397 pod-subpath-test-preprovisionedpv-nt45 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15s\n services-7762 affinity-clusterip-ppvbt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14s\n statefulset-3212 test-ss-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56s\n statefulset-4975 ss2-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 200m (10%) 0 (0%)\n memory 198Mi (5%) 270Mi (7%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 7m kubelet Starting kubelet.\n Normal NodeHasSufficientMemory 7m (x2 over 7m) kubelet Node ip-172-20-38-101.eu-west-1.compute.internal status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 7m (x2 over 7m) kubelet Node ip-172-20-38-101.eu-west-1.compute.internal status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 7m (x2 over 7m) kubelet Node ip-172-20-38-101.eu-west-1.compute.internal status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 7m kubelet Updated Node Allocatable limit across pods\n Normal NodeReady 6m19s kubelet Node ip-172-20-38-101.eu-west-1.compute.internal status is now: NodeReady\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl describe
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1109
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":13,"skipped":109,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:33.797: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-1dfa08ff-ebfb-4453-a509-b4467e99fe70
STEP: Creating a pod to test consume configMaps
Jun 17 04:48:27.243: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3" in namespace "projected-442" to be "Succeeded or Failed"
Jun 17 04:48:27.349: INFO: Pod "pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3": Phase="Pending", Reason="", readiness=false. Elapsed: 105.868329ms
Jun 17 04:48:29.455: INFO: Pod "pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212070704s
Jun 17 04:48:31.567: INFO: Pod "pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324177025s
Jun 17 04:48:33.674: INFO: Pod "pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430653271s
Jun 17 04:48:35.784: INFO: Pod "pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.54038334s
STEP: Saw pod success
Jun 17 04:48:35.784: INFO: Pod "pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3" satisfied condition "Succeeded or Failed"
Jun 17 04:48:35.890: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3 container agnhost-container: <nil>
STEP: delete the pod
Jun 17 04:48:36.111: INFO: Waiting for pod pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3 to disappear
Jun 17 04:48:36.217: INFO: Pod pod-projected-configmaps-0e13cef1-b339-420c-bc30-b1931a75d3c3 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.141 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":72,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
(Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":4,"skipped":21,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 124 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test substitution in container's command
Jun 17 04:48:34.668: INFO: Waiting up to 5m0s for pod "var-expansion-132a5841-c36c-41e5-83f6-81ebc13b7d15" in namespace "var-expansion-6399" to be "Succeeded or Failed"
Jun 17 04:48:34.775: INFO: Pod "var-expansion-132a5841-c36c-41e5-83f6-81ebc13b7d15": Phase="Pending", Reason="", readiness=false. Elapsed: 106.650809ms
Jun 17 04:48:36.880: INFO: Pod "var-expansion-132a5841-c36c-41e5-83f6-81ebc13b7d15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211456005s
Jun 17 04:48:38.986: INFO: Pod "var-expansion-132a5841-c36c-41e5-83f6-81ebc13b7d15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317281294s
Jun 17 04:48:41.090: INFO: Pod "var-expansion-132a5841-c36c-41e5-83f6-81ebc13b7d15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.422029645s
STEP: Saw pod success
Jun 17 04:48:41.091: INFO: Pod "var-expansion-132a5841-c36c-41e5-83f6-81ebc13b7d15" satisfied condition "Succeeded or Failed"
Jun 17 04:48:41.195: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod var-expansion-132a5841-c36c-41e5-83f6-81ebc13b7d15 container dapi-container: <nil>
STEP: delete the pod
Jun 17 04:48:41.413: INFO: Waiting for pod var-expansion-132a5841-c36c-41e5-83f6-81ebc13b7d15 to disappear
Jun 17 04:48:41.518: INFO: Pod var-expansion-132a5841-c36c-41e5-83f6-81ebc13b7d15 no longer exists
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.900 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":121,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:41.756: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 63 lines ...
• [SLOW TEST:33.153 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to preserve UDP traffic when server pod cycles for a NodePort service
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":6,"skipped":22,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:42.823: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 80 lines ...
• [SLOW TEST:127.561 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:1165
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled","total":-1,"completed":4,"skipped":34,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:43.300: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 196 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":8,"skipped":51,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:43.720: INFO: Only supported for providers [vsphere] (not aws)
... skipping 79 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:252
Only supported for providers [azure] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1576
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":9,"skipped":61,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:48:26.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
• [SLOW TEST:17.715 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":10,"skipped":61,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:43.884: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 118 lines ...
• [SLOW TEST:27.028 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":22,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:44.740: INFO: Only supported for providers [gce gke] (not aws)
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jun 17 04:48:38.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1df58f4-07c3-407a-a0d5-0da6233c9dcd" in namespace "projected-2252" to be "Succeeded or Failed"
Jun 17 04:48:38.729: INFO: Pod "downwardapi-volume-d1df58f4-07c3-407a-a0d5-0da6233c9dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 105.670888ms
Jun 17 04:48:40.836: INFO: Pod "downwardapi-volume-d1df58f4-07c3-407a-a0d5-0da6233c9dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212291366s
Jun 17 04:48:42.943: INFO: Pod "downwardapi-volume-d1df58f4-07c3-407a-a0d5-0da6233c9dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319667035s
Jun 17 04:48:45.049: INFO: Pod "downwardapi-volume-d1df58f4-07c3-407a-a0d5-0da6233c9dcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.425298543s
STEP: Saw pod success
Jun 17 04:48:45.049: INFO: Pod "downwardapi-volume-d1df58f4-07c3-407a-a0d5-0da6233c9dcd" satisfied condition "Succeeded or Failed"
Jun 17 04:48:45.154: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod downwardapi-volume-d1df58f4-07c3-407a-a0d5-0da6233c9dcd container client-container: <nil>
STEP: delete the pod
Jun 17 04:48:45.375: INFO: Waiting for pod downwardapi-volume-d1df58f4-07c3-407a-a0d5-0da6233c9dcd to disappear
Jun 17 04:48:45.481: INFO: Pod downwardapi-volume-d1df58f4-07c3-407a-a0d5-0da6233c9dcd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.924 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":22,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:45.779: INFO: Only supported for providers [openstack] (not aws)
... skipping 43 lines ...
Jun 17 04:48:14.730: INFO: PersistentVolumeClaim pvc-j2h42 found but phase is Pending instead of Bound.
Jun 17 04:48:16.835: INFO: PersistentVolumeClaim pvc-j2h42 found and phase=Bound (6.4330167s)
Jun 17 04:48:16.836: INFO: Waiting up to 3m0s for PersistentVolume local-wzl6b to have phase Bound
Jun 17 04:48:16.946: INFO: PersistentVolume local-wzl6b found and phase=Bound (110.505026ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nt45
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 04:48:17.264: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nt45" in namespace "provisioning-4397" to be "Succeeded or Failed"
Jun 17 04:48:17.369: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Pending", Reason="", readiness=false. Elapsed: 105.141169ms
Jun 17 04:48:19.475: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211137322s
Jun 17 04:48:21.580: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316492829s
Jun 17 04:48:23.688: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424006553s
Jun 17 04:48:25.794: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529680913s
Jun 17 04:48:27.900: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Running", Reason="", readiness=true. Elapsed: 10.636265625s
... skipping 3 lines ...
Jun 17 04:48:36.325: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Running", Reason="", readiness=true. Elapsed: 19.061376312s
Jun 17 04:48:38.432: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Running", Reason="", readiness=true. Elapsed: 21.168043409s
Jun 17 04:48:40.538: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Running", Reason="", readiness=true. Elapsed: 23.274135413s
Jun 17 04:48:42.646: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Running", Reason="", readiness=true. Elapsed: 25.382627876s
Jun 17 04:48:44.753: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.489580746s
STEP: Saw pod success
Jun 17 04:48:44.753: INFO: Pod "pod-subpath-test-preprovisionedpv-nt45" satisfied condition "Succeeded or Failed"
Jun 17 04:48:44.859: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-nt45 container test-container-subpath-preprovisionedpv-nt45: <nil>
STEP: delete the pod
Jun 17 04:48:45.078: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nt45 to disappear
Jun 17 04:48:45.183: INFO: Pod pod-subpath-test-preprovisionedpv-nt45 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nt45
Jun 17 04:48:45.183: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nt45" in namespace "provisioning-4397"
... skipping 21 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":53,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:48:45.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jun 17 04:48:46.653: INFO: found topology map[topology.kubernetes.io/zone:eu-west-1a]
Jun 17 04:48:46.653: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 17 04:48:46.653: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: aws]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Not enough topologies in cluster -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 95 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:51.511: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 21 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jun 17 04:48:43.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0" in namespace "projected-3220" to be "Succeeded or Failed"
Jun 17 04:48:43.931: INFO: Pod "downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 105.764704ms
Jun 17 04:48:46.038: INFO: Pod "downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212353006s
Jun 17 04:48:48.145: INFO: Pod "downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319266188s
Jun 17 04:48:50.251: INFO: Pod "downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426023675s
Jun 17 04:48:52.358: INFO: Pod "downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.532619427s
STEP: Saw pod success
Jun 17 04:48:52.358: INFO: Pod "downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0" satisfied condition "Succeeded or Failed"
Jun 17 04:48:52.464: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0 container client-container: <nil>
STEP: delete the pod
Jun 17 04:48:52.697: INFO: Waiting for pod downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0 to disappear
Jun 17 04:48:52.802: INFO: Pod downwardapi-volume-b82242c1-e472-4506-b622-4019c6bbf0e0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.048 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 56 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":52,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:48:53.192: INFO: Only supported for providers [gce gke] (not aws)
... skipping 164 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:48:53.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-7060" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":6,"skipped":49,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 102 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
[1mSTEP[0m: Creating a kubernetes client
Jun 17 04:48:55.435: INFO: >>> kubeConfig: /root/.kube/config
... skipping 148 lines ...
Jun 17 04:48:24.365: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-vlvnm] to have phase Bound
Jun 17 04:48:24.470: INFO: PersistentVolumeClaim pvc-vlvnm found and phase=Bound (105.493702ms)
STEP: Deleting the previously created pod
Jun 17 04:48:31.000: INFO: Deleting pod "pvc-volume-tester-c7trf" in namespace "csi-mock-volumes-9630"
Jun 17 04:48:31.107: INFO: Wait up to 5m0s for pod "pvc-volume-tester-c7trf" to be fully deleted
STEP: Checking CSI driver logs
Jun 17 04:48:33.427: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/1a2a62f4-6f5f-4330-ac66-9199d8333f5e/volumes/kubernetes.io~csi/pvc-de1541e3-976b-4774-8790-dbcdb3a873dd/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-c7trf
Jun 17 04:48:33.427: INFO: Deleting pod "pvc-volume-tester-c7trf" in namespace "csi-mock-volumes-9630"
STEP: Deleting claim pvc-vlvnm
Jun 17 04:48:33.752: INFO: Waiting up to 2m0s for PersistentVolume pvc-de1541e3-976b-4774-8790-dbcdb3a873dd to get deleted
Jun 17 04:48:33.857: INFO: PersistentVolume pvc-de1541e3-976b-4774-8790-dbcdb3a873dd found and phase=Released (105.286453ms)
Jun 17 04:48:35.963: INFO: PersistentVolume pvc-de1541e3-976b-4774-8790-dbcdb3a873dd found and phase=Released (2.211652618s)
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSIServiceAccountToken
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1576
token should not be plumbed down when CSIDriver is not deployed
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1604
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":3,"skipped":26,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 51 lines ...
• [SLOW TEST:26.081 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":8,"skipped":54,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
on terminated container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":73,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:00.386: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating secret secrets-4785/secret-test-fcbeebb1-901f-4cb1-8c38-b3936a171f1e
STEP: Creating a pod to test consume secrets
Jun 17 04:48:47.941: INFO: Waiting up to 5m0s for pod "pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639" in namespace "secrets-4785" to be "Succeeded or Failed"
Jun 17 04:48:48.048: INFO: Pod "pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639": Phase="Pending", Reason="", readiness=false. Elapsed: 106.248444ms
Jun 17 04:48:50.155: INFO: Pod "pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213384064s
Jun 17 04:48:52.261: INFO: Pod "pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319219253s
Jun 17 04:48:54.367: INFO: Pod "pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425962525s
Jun 17 04:48:56.473: INFO: Pod "pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532028729s
Jun 17 04:48:58.580: INFO: Pod "pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638902509s
Jun 17 04:49:00.686: INFO: Pod "pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.744888471s
STEP: Saw pod success
Jun 17 04:49:00.686: INFO: Pod "pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639" satisfied condition "Succeeded or Failed"
Jun 17 04:49:00.796: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639 container env-test: <nil>
STEP: delete the pod
Jun 17 04:49:01.016: INFO: Waiting for pod pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639 to disappear
Jun 17 04:49:01.121: INFO: Pod pod-configmaps-484d7e95-c313-49ed-a0ec-60c890a72639 no longer exists
[AfterEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.350 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":33,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:01.362: INFO: Only supported for providers [gce gke] (not aws)
... skipping 52 lines ...
Jun 17 04:48:32.488: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-89152ltrx
STEP: creating a claim
Jun 17 04:48:32.593: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-q6nn
STEP: Creating a pod to test subpath
Jun 17 04:48:32.917: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-q6nn" in namespace "provisioning-8915" to be "Succeeded or Failed"
Jun 17 04:48:33.022: INFO: Pod "pod-subpath-test-dynamicpv-q6nn": Phase="Pending", Reason="", readiness=false. Elapsed: 104.996019ms
Jun 17 04:48:35.128: INFO: Pod "pod-subpath-test-dynamicpv-q6nn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211365573s
Jun 17 04:48:37.234: INFO: Pod "pod-subpath-test-dynamicpv-q6nn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316965706s
Jun 17 04:48:39.343: INFO: Pod "pod-subpath-test-dynamicpv-q6nn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425601507s
Jun 17 04:48:41.448: INFO: Pod "pod-subpath-test-dynamicpv-q6nn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531366966s
Jun 17 04:48:43.555: INFO: Pod "pod-subpath-test-dynamicpv-q6nn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638204941s
Jun 17 04:48:45.661: INFO: Pod "pod-subpath-test-dynamicpv-q6nn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.743835866s
Jun 17 04:48:47.768: INFO: Pod "pod-subpath-test-dynamicpv-q6nn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.850484953s
Jun 17 04:48:49.873: INFO: Pod "pod-subpath-test-dynamicpv-q6nn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.956300599s
STEP: Saw pod success
Jun 17 04:48:49.874: INFO: Pod "pod-subpath-test-dynamicpv-q6nn" satisfied condition "Succeeded or Failed"
Jun 17 04:48:49.979: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-q6nn container test-container-subpath-dynamicpv-q6nn: <nil>
STEP: delete the pod
Jun 17 04:48:50.201: INFO: Waiting for pod pod-subpath-test-dynamicpv-q6nn to disappear
Jun 17 04:48:50.306: INFO: Pod pod-subpath-test-dynamicpv-q6nn no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-q6nn
Jun 17 04:48:50.306: INFO: Deleting pod "pod-subpath-test-dynamicpv-q6nn" in namespace "provisioning-8915"
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":54,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:01.521: INFO: Only supported for providers [gce gke] (not aws)
... skipping 53 lines ...
• [SLOW TEST:7.603 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should enable privileged commands [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":10,"skipped":89,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
should validate Statefulset Status endpoints [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":7,"skipped":26,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
... skipping 10 lines ...
Jun 17 04:48:40.328: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volume-931vx7bp
STEP: creating a claim
Jun 17 04:48:40.434: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-zcjh
STEP: Creating a pod to test exec-volume-test
Jun 17 04:48:40.761: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-zcjh" in namespace "volume-931" to be "Succeeded or Failed"
Jun 17 04:48:40.867: INFO: Pod "exec-volume-test-dynamicpv-zcjh": Phase="Pending", Reason="", readiness=false. Elapsed: 105.717239ms
Jun 17 04:48:42.975: INFO: Pod "exec-volume-test-dynamicpv-zcjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21354005s
Jun 17 04:48:45.082: INFO: Pod "exec-volume-test-dynamicpv-zcjh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320791448s
Jun 17 04:48:47.189: INFO: Pod "exec-volume-test-dynamicpv-zcjh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427048615s
Jun 17 04:48:49.295: INFO: Pod "exec-volume-test-dynamicpv-zcjh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533316186s
Jun 17 04:48:51.402: INFO: Pod "exec-volume-test-dynamicpv-zcjh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.640571553s
Jun 17 04:48:53.511: INFO: Pod "exec-volume-test-dynamicpv-zcjh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.749893893s
Jun 17 04:48:55.618: INFO: Pod "exec-volume-test-dynamicpv-zcjh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.856800711s
STEP: Saw pod success
Jun 17 04:48:55.618: INFO: Pod "exec-volume-test-dynamicpv-zcjh" satisfied condition "Succeeded or Failed"
Jun 17 04:48:55.724: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod exec-volume-test-dynamicpv-zcjh container exec-container-dynamicpv-zcjh: <nil>
STEP: delete the pod
Jun 17 04:48:55.955: INFO: Waiting for pod exec-volume-test-dynamicpv-zcjh to disappear
Jun 17 04:48:56.061: INFO: Pod exec-volume-test-dynamicpv-zcjh no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-zcjh
Jun 17 04:48:56.061: INFO: Deleting pod "exec-volume-test-dynamicpv-zcjh" in namespace "volume-931"
... skipping 17 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":100,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:07.167: INFO: Only supported for providers [openstack] (not aws)
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-4bc3f78f-93ef-48ca-a4af-85f59a143d34
STEP: Creating a pod to test consume configMaps
Jun 17 04:49:02.487: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf71b05e-cf64-4eec-a408-7f46d72ced2c" in namespace "projected-8284" to be "Succeeded or Failed"
Jun 17 04:49:02.593: INFO: Pod "pod-projected-configmaps-cf71b05e-cf64-4eec-a408-7f46d72ced2c": Phase="Pending", Reason="", readiness=false. Elapsed: 105.518374ms
Jun 17 04:49:04.701: INFO: Pod "pod-projected-configmaps-cf71b05e-cf64-4eec-a408-7f46d72ced2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213674646s
Jun 17 04:49:06.806: INFO: Pod "pod-projected-configmaps-cf71b05e-cf64-4eec-a408-7f46d72ced2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318841315s
Jun 17 04:49:08.911: INFO: Pod "pod-projected-configmaps-cf71b05e-cf64-4eec-a408-7f46d72ced2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.424382759s
STEP: Saw pod success
Jun 17 04:49:08.912: INFO: Pod "pod-projected-configmaps-cf71b05e-cf64-4eec-a408-7f46d72ced2c" satisfied condition "Succeeded or Failed"
Jun 17 04:49:09.017: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod pod-projected-configmaps-cf71b05e-cf64-4eec-a408-7f46d72ced2c container agnhost-container: <nil>
STEP: delete the pod
Jun 17 04:49:09.236: INFO: Waiting for pod pod-projected-configmaps-cf71b05e-cf64-4eec-a408-7f46d72ced2c to disappear
Jun 17 04:49:09.343: INFO: Pod pod-projected-configmaps-cf71b05e-cf64-4eec-a408-7f46d72ced2c no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.019 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":60,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:09.609: INFO: Only supported for providers [vsphere] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: vsphere]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Only supported for providers [vsphere] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 76 lines ...
• [SLOW TEST:6.621 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":11,"skipped":91,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:10.856: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 118 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-kdd6
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 04:48:42.838: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kdd6" in namespace "subpath-7053" to be "Succeeded or Failed"
Jun 17 04:48:42.943: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Pending", Reason="", readiness=false. Elapsed: 105.072437ms
Jun 17 04:48:45.048: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209967128s
Jun 17 04:48:47.153: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315364402s
Jun 17 04:48:49.258: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42017319s
Jun 17 04:48:51.364: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Running", Reason="", readiness=true. Elapsed: 8.525765935s
Jun 17 04:48:53.469: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Running", Reason="", readiness=true. Elapsed: 10.631381856s
... skipping 3 lines ...
Jun 17 04:49:01.892: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Running", Reason="", readiness=true. Elapsed: 19.054559489s
Jun 17 04:49:03.997: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Running", Reason="", readiness=true. Elapsed: 21.159436393s
Jun 17 04:49:06.102: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Running", Reason="", readiness=true. Elapsed: 23.264592878s
Jun 17 04:49:08.208: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Running", Reason="", readiness=false. Elapsed: 25.370309989s
Jun 17 04:49:10.313: INFO: Pod "pod-subpath-test-configmap-kdd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.475686209s
STEP: Saw pod success
Jun 17 04:49:10.314: INFO: Pod "pod-subpath-test-configmap-kdd6" satisfied condition "Succeeded or Failed"
Jun 17 04:49:10.418: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-subpath-test-configmap-kdd6 container test-container-subpath-configmap-kdd6: <nil>
STEP: delete the pod
Jun 17 04:49:10.646: INFO: Waiting for pod pod-subpath-test-configmap-kdd6 to disappear
Jun 17 04:49:10.750: INFO: Pod pod-subpath-test-configmap-kdd6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-kdd6
Jun 17 04:49:10.750: INFO: Deleting pod "pod-subpath-test-configmap-kdd6" in namespace "subpath-7053"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":15,"skipped":123,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:47:41.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-6714" for this suite.
• [SLOW TEST:89.700 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete failed finished jobs with limit of one job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":9,"skipped":91,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:11.120: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 148 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:11.404: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
should contain last line of the log
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:623
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":9,"skipped":67,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
When creating a container with runAsNonRoot
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
should not run without a specified user ID
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":10,"skipped":99,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:20.426: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 83 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support existing directory
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jun 17 04:48:54.918: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 04:48:55.147: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2688" in namespace "provisioning-2688" to be "Succeeded or Failed"
Jun 17 04:48:55.252: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Pending", Reason="", readiness=false. Elapsed: 105.38745ms
Jun 17 04:48:57.358: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211286694s
Jun 17 04:48:59.465: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.318486309s
STEP: Saw pod success
Jun 17 04:48:59.466: INFO: Pod "hostpath-symlink-prep-provisioning-2688" satisfied condition "Succeeded or Failed"
Jun 17 04:48:59.466: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2688" in namespace "provisioning-2688"
Jun 17 04:48:59.576: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2688" to be fully deleted
Jun 17 04:48:59.681: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-vs7p
STEP: Creating a pod to test subpath
Jun 17 04:48:59.788: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-vs7p" in namespace "provisioning-2688" to be "Succeeded or Failed"
Jun 17 04:48:59.902: INFO: Pod "pod-subpath-test-inlinevolume-vs7p": Phase="Pending", Reason="", readiness=false. Elapsed: 113.700416ms
Jun 17 04:49:02.012: INFO: Pod "pod-subpath-test-inlinevolume-vs7p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223526431s
Jun 17 04:49:04.118: INFO: Pod "pod-subpath-test-inlinevolume-vs7p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329536827s
Jun 17 04:49:06.224: INFO: Pod "pod-subpath-test-inlinevolume-vs7p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435862475s
Jun 17 04:49:08.330: INFO: Pod "pod-subpath-test-inlinevolume-vs7p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.542037888s
STEP: Saw pod success
Jun 17 04:49:08.330: INFO: Pod "pod-subpath-test-inlinevolume-vs7p" satisfied condition "Succeeded or Failed"
Jun 17 04:49:08.436: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-vs7p container test-container-volume-inlinevolume-vs7p: <nil>
STEP: delete the pod
Jun 17 04:49:08.656: INFO: Waiting for pod pod-subpath-test-inlinevolume-vs7p to disappear
Jun 17 04:49:08.762: INFO: Pod pod-subpath-test-inlinevolume-vs7p no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-vs7p
Jun 17 04:49:08.762: INFO: Deleting pod "pod-subpath-test-inlinevolume-vs7p" in namespace "provisioning-2688"
STEP: Deleting pod
Jun 17 04:49:08.867: INFO: Deleting pod "pod-subpath-test-inlinevolume-vs7p" in namespace "provisioning-2688"
Jun 17 04:49:09.079: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2688" in namespace "provisioning-2688" to be "Succeeded or Failed"
Jun 17 04:49:09.185: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Pending", Reason="", readiness=false. Elapsed: 105.464492ms
Jun 17 04:49:11.291: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211442857s
Jun 17 04:49:13.397: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317229324s
Jun 17 04:49:15.503: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423870915s
Jun 17 04:49:17.609: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529763582s
Jun 17 04:49:19.716: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Pending", Reason="", readiness=false. Elapsed: 10.63655805s
Jun 17 04:49:21.823: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Pending", Reason="", readiness=false. Elapsed: 12.743093851s
Jun 17 04:49:23.929: INFO: Pod "hostpath-symlink-prep-provisioning-2688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.849762295s
STEP: Saw pod success
Jun 17 04:49:23.929: INFO: Pod "hostpath-symlink-prep-provisioning-2688" satisfied condition "Succeeded or Failed"
Jun 17 04:49:23.929: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2688" in namespace "provisioning-2688"
Jun 17 04:49:24.038: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2688" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:49:24.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2688" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing directory
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":55,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:24.409: INFO: Only supported for providers [vsphere] (not aws)
... skipping 24 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jun 17 04:49:08.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab" in namespace "downward-api-5115" to be "Succeeded or Failed"
Jun 17 04:49:08.187: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab": Phase="Pending", Reason="", readiness=false. Elapsed: 105.150784ms
Jun 17 04:49:10.293: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211469652s
Jun 17 04:49:12.411: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32957935s
Jun 17 04:49:14.518: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436329599s
Jun 17 04:49:16.626: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545006358s
Jun 17 04:49:18.732: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650250489s
Jun 17 04:49:20.838: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab": Phase="Pending", Reason="", readiness=false. Elapsed: 12.757015714s
Jun 17 04:49:22.944: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab": Phase="Pending", Reason="", readiness=false. Elapsed: 14.862636838s
Jun 17 04:49:25.051: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.969584833s
STEP: Saw pod success
Jun 17 04:49:25.051: INFO: Pod "downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab" satisfied condition "Succeeded or Failed"
Jun 17 04:49:25.159: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab container client-container: <nil>
STEP: delete the pod
Jun 17 04:49:25.385: INFO: Waiting for pod downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab to disappear
Jun 17 04:49:25.490: INFO: Pod downwardapi-volume-e0693d47-36db-44b5-8d96-9658e8ca58ab no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:18.465 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":115,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:25.715: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 37 lines ...
Jun 17 04:48:57.148: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:48:57.260: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:48:57.368: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:48:57.476: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:48:57.584: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:48:57.691: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:48:57.691: INFO: Lookups using dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local]
Jun 17 04:49:02.803: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:02.914: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:03.022: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:03.130: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:03.238: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:03.346: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:03.454: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:03.562: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:03.562: INFO: Lookups using dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local]
Jun 17 04:49:07.801: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:07.909: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:08.017: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:08.125: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:08.233: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:08.341: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:08.449: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:08.556: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:08.556: INFO: Lookups using dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local]
Jun 17 04:49:12.800: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:12.907: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:13.014: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:13.122: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:13.230: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:13.337: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:13.446: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:13.554: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:13.554: INFO: Lookups using dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local]
Jun 17 04:49:17.801: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:17.909: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:18.017: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:18.315: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:18.432: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:18.543: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:18.834: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:18.947: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:18.948: INFO: Lookups using dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local]
Jun 17 04:49:22.807: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:22.995: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:23.126: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:23.233: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:23.344: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:23.452: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:23.589: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:23.724: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local from pod dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c: the server could not find the requested resource (get pods dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c)
Jun 17 04:49:23.724: INFO: Lookups using dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1714.svc.cluster.local jessie_udp@dns-test-service-2.dns-1714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1714.svc.cluster.local]
Jun 17 04:49:28.559: INFO: DNS probes using dns-1714/dns-test-f8cccc1c-6fa0-41c7-9d71-3753697c9e5c succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:37.479 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":7,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:29.016: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 87 lines ...
STEP: Deleting pod hostexec-ip-172-20-50-49.eu-west-1.compute.internal-jxbtk in namespace volumemode-5597
Jun 17 04:49:13.689: INFO: Deleting pod "pod-80d09889-e6ac-42cb-ab0d-4608d987875a" in namespace "volumemode-5597"
Jun 17 04:49:13.804: INFO: Wait up to 5m0s for pod "pod-80d09889-e6ac-42cb-ab0d-4608d987875a" to be fully deleted
STEP: Deleting pv and pvc
Jun 17 04:49:32.015: INFO: Deleting PersistentVolumeClaim "pvc-cx7p4"
Jun 17 04:49:32.121: INFO: Deleting PersistentVolume "aws-7kfjf"
Jun 17 04:49:32.417: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0c10ebc19cdf9cb59", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c10ebc19cdf9cb59 is currently attached to i-04a941478363a42d3
status code: 400, request id: 4e97d1a1-9a02-420c-b4e9-d0be63b55412
Jun 17 04:49:38.044: INFO: Successfully deleted PD "aws://eu-west-1a/vol-0c10ebc19cdf9cb59".
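The "Couldn't delete PD ... sleeping 5s" followed by "Successfully deleted PD" above reflects deletion being retried while EC2 still reports the EBS volume as attached (VolumeInUse). A rough Go sketch of that retry pattern with the aws-sdk-go EC2 client follows; the attempt bound and the reuse of the volume ID from the log are illustrative assumptions, not the e2e framework's actual implementation:

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// deleteVolumeWithRetry retries DeleteVolume while EBS answers with
// VolumeInUse (HTTP 400), i.e. while the volume is still attached to an instance.
func deleteVolumeWithRetry(svc *ec2.EC2, volumeID string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			return nil
		}
		lastErr = err
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			time.Sleep(5 * time.Second) // same back-off the log shows ("sleeping 5s")
			continue
		}
		return err // any other error is not retried
	}
	return lastErr
}

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("eu-west-1")))
	// Volume ID copied from the log lines above, purely for illustration.
	if err := deleteVolumeWithRetry(ec2.New(sess), "vol-0c10ebc19cdf9cb59", 10); err != nil {
		fmt.Println("delete failed:", err)
	}
}
```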
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:49:38.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-5597" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (block volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":32,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:38.284: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Jun 17 04:49:25.276: INFO: Waiting up to 5m0s for pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc" in namespace "security-context-test-476" to be "Succeeded or Failed"
Jun 17 04:49:25.384: INFO: Pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc": Phase="Pending", Reason="", readiness=false. Elapsed: 108.760651ms
Jun 17 04:49:27.491: INFO: Pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214854085s
Jun 17 04:49:29.597: INFO: Pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321648947s
Jun 17 04:49:31.704: INFO: Pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4282492s
Jun 17 04:49:33.811: INFO: Pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535383016s
Jun 17 04:49:35.918: INFO: Pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.641787723s
Jun 17 04:49:38.024: INFO: Pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.747871273s
Jun 17 04:49:40.129: INFO: Pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.853546636s
Jun 17 04:49:40.129: INFO: Pod "busybox-user-0-a220e4c8-e5b8-4536-8998-4e40db0f02bc" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:49:40.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-476" for this suite.
... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
When creating a container with runAsUser
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
should run the container with uid 0 [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":62,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
Jun 17 04:46:18.789: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3237 to register on node ip-172-20-46-241.eu-west-1.compute.internal
STEP: Creating pod
Jun 17 04:46:35.682: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 17 04:46:35.791: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5kf62] to have phase Bound
Jun 17 04:46:35.898: INFO: PersistentVolumeClaim pvc-5kf62 found and phase=Bound (106.703878ms)
STEP: checking for CSIInlineVolumes feature
Jun 17 04:46:44.663: INFO: Error getting logs for pod inline-volume-vmz96: the server rejected our request for an unknown reason (get pods inline-volume-vmz96)
Jun 17 04:46:44.876: INFO: Deleting pod "inline-volume-vmz96" in namespace "csi-mock-volumes-3237"
Jun 17 04:46:44.984: INFO: Wait up to 5m0s for pod "inline-volume-vmz96" to be fully deleted
STEP: Deleting the previously created pod
Jun 17 04:48:51.199: INFO: Deleting pod "pvc-volume-tester-gkxtz" in namespace "csi-mock-volumes-3237"
Jun 17 04:48:51.306: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gkxtz" to be fully deleted
STEP: Checking CSI driver logs
Jun 17 04:48:53.642: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-3237
Jun 17 04:48:53.642: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 1646dd9e-3113-45f6-bb80-79406ec9c636
Jun 17 04:48:53.642: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jun 17 04:48:53.642: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Jun 17 04:48:53.642: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-gkxtz
Jun 17 04:48:53.642: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/1646dd9e-3113-45f6-bb80-79406ec9c636/volumes/kubernetes.io~csi/pvc-e60e24ba-538c-4b85-a7c2-745a87b2d574/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-gkxtz
Jun 17 04:48:53.642: INFO: Deleting pod "pvc-volume-tester-gkxtz" in namespace "csi-mock-volumes-3237"
STEP: Deleting claim pvc-5kf62
Jun 17 04:48:53.963: INFO: Waiting up to 2m0s for PersistentVolume pvc-e60e24ba-538c-4b85-a7c2-745a87b2d574 to get deleted
Jun 17 04:48:54.070: INFO: PersistentVolume pvc-e60e24ba-538c-4b85-a7c2-745a87b2d574 found and phase=Released (106.87335ms)
Jun 17 04:48:56.176: INFO: PersistentVolume pvc-e60e24ba-538c-4b85-a7c2-745a87b2d574 found and phase=Released (2.213396037s)
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI workload information using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:469
should be passed when podInfoOnMount=true
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:519
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":4,"skipped":41,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:40.709: INFO: Only supported for providers [azure] (not aws)
... skipping 92 lines ...
• [SLOW TEST:101.994 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should remove from active list jobs that have been deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:239
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":5,"skipped":32,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:41.385: INFO: Driver emptydir doesn't support ext4 -- skipping
... skipping 161 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
Deployment should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":12,"skipped":117,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:41.492: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 246 lines ...
STEP: Deleting pod hostexec-ip-172-20-46-241.eu-west-1.compute.internal-2v9r9 in namespace volumemode-7901
Jun 17 04:49:20.910: INFO: Deleting pod "pod-eea8a3a3-9721-4c1b-8f33-58a551cbfef9" in namespace "volumemode-7901"
Jun 17 04:49:21.016: INFO: Wait up to 5m0s for pod "pod-eea8a3a3-9721-4c1b-8f33-58a551cbfef9" to be fully deleted
STEP: Deleting pv and pvc
Jun 17 04:49:43.226: INFO: Deleting PersistentVolumeClaim "pvc-vflbz"
Jun 17 04:49:43.332: INFO: Deleting PersistentVolume "aws-dr7fq"
Jun 17 04:49:43.629: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0ce6c0e0b5b8d60cf", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ce6c0e0b5b8d60cf is currently attached to i-0a5a9db282eaa50d8
status code: 400, request id: 1bbe8c52-edb4-425d-968e-d716b1d219c6
Jun 17 04:49:49.293: INFO: Successfully deleted PD "aws://eu-west-1a/vol-0ce6c0e0b5b8d60cf".
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:49:49.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-7901" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":8,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:49.510: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 9 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Only supported for providers [gce gke] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":11,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:49.516: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 63 lines ...
Jun 17 04:49:14.956: INFO: PersistentVolumeClaim pvc-twzst found but phase is Pending instead of Bound.
Jun 17 04:49:17.061: INFO: PersistentVolumeClaim pvc-twzst found and phase=Bound (8.527831776s)
Jun 17 04:49:17.061: INFO: Waiting up to 3m0s for PersistentVolume local-c7w4h to have phase Bound
Jun 17 04:49:17.166: INFO: PersistentVolume local-c7w4h found and phase=Bound (104.786527ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-4vc6
STEP: Creating a pod to test exec-volume-test
Jun 17 04:49:17.508: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-4vc6" in namespace "volume-9018" to be "Succeeded or Failed"
Jun 17 04:49:17.612: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 104.391775ms
Jun 17 04:49:19.718: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210086582s
Jun 17 04:49:21.824: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315968232s
Jun 17 04:49:23.929: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421221778s
Jun 17 04:49:26.035: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.527339727s
Jun 17 04:49:28.140: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.632333916s
... skipping 4 lines ...
Jun 17 04:49:38.670: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.161798293s
Jun 17 04:49:40.775: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.266590435s
Jun 17 04:49:42.880: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.371878914s
Jun 17 04:49:44.986: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.478212388s
Jun 17 04:49:47.095: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.587174842s
STEP: Saw pod success
Jun 17 04:49:47.095: INFO: Pod "exec-volume-test-preprovisionedpv-4vc6" satisfied condition "Succeeded or Failed"
Jun 17 04:49:47.200: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-4vc6 container exec-container-preprovisionedpv-4vc6: <nil>
STEP: delete the pod
Jun 17 04:49:47.418: INFO: Waiting for pod exec-volume-test-preprovisionedpv-4vc6 to disappear
Jun 17 04:49:47.523: INFO: Pod exec-volume-test-preprovisionedpv-4vc6 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-4vc6
Jun 17 04:49:47.523: INFO: Deleting pod "exec-volume-test-preprovisionedpv-4vc6" in namespace "volume-9018"
... skipping 24 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":77,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:50.011: INFO: Only supported for providers [azure] (not aws)
... skipping 14 lines ...
Only supported for providers [azure] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1576
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":8,"skipped":44,"failed":0}
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:49:20.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
• [SLOW TEST:30.419 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should delete old replica sets [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":9,"skipped":44,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:51.211: INFO: Only supported for providers [openstack] (not aws)
... skipping 34 lines ...
STEP: Destroying namespace "apply-997" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56
•S
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":9,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:51.262: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 234 lines ...
• [SLOW TEST:40.472 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:51.970: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 105 lines ...
• [SLOW TEST:12.751 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:530
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":9,"skipped":65,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 83 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume one after the other
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":27,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:54.386: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 36 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:49:55.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3655" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":9,"skipped":38,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:49:56.134: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:73.767 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":56,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:00.494: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 107 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":10,"skipped":72,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:03.071: INFO: Only supported for providers [openstack] (not aws)
... skipping 58 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":16,"skipped":125,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:04.232: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: emptydir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver emptydir doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":9,"skipped":75,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:49:11.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 138 lines ...
• [SLOW TEST:54.590 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":10,"skipped":75,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:05.649: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 74 lines ...
• [SLOW TEST:15.273 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":10,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:06.745: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 75 lines ...
• [SLOW TEST:25.369 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":13,"skipped":119,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:06.900: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 133 lines ...
• [SLOW TEST:16.069 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":10,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:07.448: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 450 lines ...
• [SLOW TEST:18.403 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should not be very high [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":12,"skipped":74,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:08.010: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 81 lines ...
Jun 17 04:48:18.219: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6576
Jun 17 04:48:18.326: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6576
Jun 17 04:48:18.433: INFO: creating *v1.StatefulSet: csi-mock-volumes-6576-1386/csi-mockplugin
Jun 17 04:48:18.542: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6576
Jun 17 04:48:18.648: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6576"
Jun 17 04:48:18.754: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6576 to register on node ip-172-20-39-216.eu-west-1.compute.internal
I0617 04:48:24.104930 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0617 04:48:24.212611 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6576","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0617 04:48:24.318765 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0617 04:48:24.425579 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0617 04:48:24.668579 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6576","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0617 04:48:25.489051 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-6576"},"Error":"","FullError":null}
STEP: Creating pod with fsGroup
Jun 17 04:48:28.786: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 17 04:48:28.894: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-hj5f9] to have phase Bound
I0617 04:48:28.910690 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d2763b76-0487-4a20-826c-330485ba43ef","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d2763b76-0487-4a20-826c-330485ba43ef"}}},"Error":"","FullError":null}
Jun 17 04:48:29.000: INFO: PersistentVolumeClaim pvc-hj5f9 found but phase is Pending instead of Bound.
Jun 17 04:48:31.110: INFO: PersistentVolumeClaim pvc-hj5f9 found and phase=Bound (2.215186427s)
I0617 04:48:31.700998 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0617 04:48:31.817149 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0617 04:48:31.924571 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 17 04:48:32.039: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 04:48:32.040: INFO: ExecWithOptions: Clientset creation
Jun 17 04:48:32.040: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-6576-1386/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fpv%2Fpvc-d2763b76-0487-4a20-826c-330485ba43ef%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fpv%2Fpvc-d2763b76-0487-4a20-826c-330485ba43ef%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING))
I0617 04:48:32.761911 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d2763b76-0487-4a20-826c-330485ba43ef/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d2763b76-0487-4a20-826c-330485ba43ef","storage.kubernetes.io/csiProvisionerIdentity":"1655441304480-8081-csi-mock-csi-mock-volumes-6576"}},"Response":{},"Error":"","FullError":null}
I0617 04:48:32.870890 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0617 04:48:32.979471 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0617 04:48:33.085528 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 17 04:48:33.191: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 04:48:33.192: INFO: ExecWithOptions: Clientset creation
Jun 17 04:48:33.192: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-6576-1386/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F73a29aee-4959-44c1-acdc-38c0971af631%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d2763b76-0487-4a20-826c-330485ba43ef%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F73a29aee-4959-44c1-acdc-38c0971af631%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d2763b76-0487-4a20-826c-330485ba43ef%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING))
Jun 17 04:48:33.908: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 04:48:33.909: INFO: ExecWithOptions: Clientset creation
Jun 17 04:48:33.909: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-6576-1386/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F73a29aee-4959-44c1-acdc-38c0971af631%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d2763b76-0487-4a20-826c-330485ba43ef%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F73a29aee-4959-44c1-acdc-38c0971af631%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d2763b76-0487-4a20-826c-330485ba43ef%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING))
Jun 17 04:48:34.616: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 04:48:34.617: INFO: ExecWithOptions: Clientset creation
Jun 17 04:48:34.617: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-6576-1386/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F73a29aee-4959-44c1-acdc-38c0971af631%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d2763b76-0487-4a20-826c-330485ba43ef%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING))
I0617 04:48:35.413674 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d2763b76-0487-4a20-826c-330485ba43ef/globalmount","target_path":"/var/lib/kubelet/pods/73a29aee-4959-44c1-acdc-38c0971af631/volumes/kubernetes.io~csi/pvc-d2763b76-0487-4a20-826c-330485ba43ef/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d2763b76-0487-4a20-826c-330485ba43ef","storage.kubernetes.io/csiProvisionerIdentity":"1655441304480-8081-csi-mock-csi-mock-volumes-6576"}},"Response":{},"Error":"","FullError":null}
STEP: Deleting pod pvc-volume-tester-vmd8l
Jun 17 04:48:37.639: INFO: Deleting pod "pvc-volume-tester-vmd8l" in namespace "csi-mock-volumes-6576"
Jun 17 04:48:37.745: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vmd8l" to be fully deleted
Jun 17 04:49:09.868: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 04:49:09.869: INFO: ExecWithOptions: Clientset creation
Jun 17 04:49:09.869: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-amzn2-k23-docker.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-6576-1386/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F73a29aee-4959-44c1-acdc-38c0971af631%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-d2763b76-0487-4a20-826c-330485ba43ef%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING))
I0617 04:49:10.593492 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/73a29aee-4959-44c1-acdc-38c0971af631/volumes/kubernetes.io~csi/pvc-d2763b76-0487-4a20-826c-330485ba43ef/mount"},"Response":{},"Error":"","FullError":null}
I0617 04:49:10.782701 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0617 04:49:10.890730 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d2763b76-0487-4a20-826c-330485ba43ef/globalmount"},"Response":{},"Error":"","FullError":null}
STEP: Deleting claim pvc-hj5f9
Jun 17 04:49:12.222: INFO: Waiting up to 2m0s for PersistentVolume pvc-d2763b76-0487-4a20-826c-330485ba43ef to get deleted
I0617 04:49:12.270771 6632 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
Jun 17 04:49:12.330: INFO: PersistentVolume pvc-d2763b76-0487-4a20-826c-330485ba43ef found and phase=Released (108.108825ms)
Jun 17 04:49:14.436: INFO: PersistentVolume pvc-d2763b76-0487-4a20-826c-330485ba43ef was removed
STEP: Deleting storageclass csi-mock-volumes-6576-sc9pc6s
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-6576
STEP: Waiting for namespaces [csi-mock-volumes-6576] to vanish
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Delegate FSGroup to CSI driver [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1721
should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1737
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP","total":-1,"completed":13,"skipped":106,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:09.655: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 36 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:50:09.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1391" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":11,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:09.824: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 26 lines ...
[It] should support existing single file [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jun 17 04:50:01.287: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 17 04:50:01.287: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-mhrd
STEP: Creating a pod to test subpath
Jun 17 04:50:01.394: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-mhrd" in namespace "provisioning-6185" to be "Succeeded or Failed"
Jun 17 04:50:01.499: INFO: Pod "pod-subpath-test-inlinevolume-mhrd": Phase="Pending", Reason="", readiness=false. Elapsed: 105.240553ms
Jun 17 04:50:03.615: INFO: Pod "pod-subpath-test-inlinevolume-mhrd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221328878s
Jun 17 04:50:05.723: INFO: Pod "pod-subpath-test-inlinevolume-mhrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329394808s
Jun 17 04:50:07.829: INFO: Pod "pod-subpath-test-inlinevolume-mhrd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435091405s
Jun 17 04:50:09.934: INFO: Pod "pod-subpath-test-inlinevolume-mhrd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540410501s
Jun 17 04:50:12.044: INFO: Pod "pod-subpath-test-inlinevolume-mhrd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649605867s
Jun 17 04:50:14.151: INFO: Pod "pod-subpath-test-inlinevolume-mhrd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.757307021s
Jun 17 04:50:16.260: INFO: Pod "pod-subpath-test-inlinevolume-mhrd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.865847527s
STEP: Saw pod success
Jun 17 04:50:16.260: INFO: Pod "pod-subpath-test-inlinevolume-mhrd" satisfied condition "Succeeded or Failed"
Jun 17 04:50:16.367: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-mhrd container test-container-subpath-inlinevolume-mhrd: <nil>
STEP: delete the pod
Jun 17 04:50:16.598: INFO: Waiting for pod pod-subpath-test-inlinevolume-mhrd to disappear
Jun 17 04:50:16.703: INFO: Pod pod-subpath-test-inlinevolume-mhrd no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-mhrd
Jun 17 04:50:16.703: INFO: Deleting pod "pod-subpath-test-inlinevolume-mhrd" in namespace "provisioning-6185"
... skipping 12 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":65,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Jun 17 04:49:44.319: INFO: PersistentVolumeClaim pvc-24nkh found but phase is Pending instead of Bound.
Jun 17 04:49:46.454: INFO: PersistentVolumeClaim pvc-24nkh found and phase=Bound (8.560875387s)
Jun 17 04:49:46.454: INFO: Waiting up to 3m0s for PersistentVolume local-lzkw5 to have phase Bound
Jun 17 04:49:46.573: INFO: PersistentVolume local-lzkw5 found and phase=Bound (118.805823ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tvq8
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 04:49:46.899: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tvq8" in namespace "provisioning-6564" to be "Succeeded or Failed"
Jun 17 04:49:47.004: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Pending", Reason="", readiness=false. Elapsed: 104.991663ms
Jun 17 04:49:49.129: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230237963s
Jun 17 04:49:51.235: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Running", Reason="", readiness=true. Elapsed: 4.336757691s
Jun 17 04:49:53.342: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Running", Reason="", readiness=true. Elapsed: 6.443080432s
Jun 17 04:49:55.448: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Running", Reason="", readiness=true. Elapsed: 8.549245334s
Jun 17 04:49:57.557: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Running", Reason="", readiness=true. Elapsed: 10.65870467s
... skipping 3 lines ...
Jun 17 04:50:06.015: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Running", Reason="", readiness=true. Elapsed: 19.116641996s
Jun 17 04:50:08.122: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Running", Reason="", readiness=true. Elapsed: 21.223090252s
Jun 17 04:50:10.228: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Running", Reason="", readiness=true. Elapsed: 23.329243309s
Jun 17 04:50:12.336: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Running", Reason="", readiness=true. Elapsed: 25.43761632s
Jun 17 04:50:14.454: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.555280053s
STEP: Saw pod success
Jun 17 04:50:14.454: INFO: Pod "pod-subpath-test-preprovisionedpv-tvq8" satisfied condition "Succeeded or Failed"
Jun 17 04:50:14.566: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-tvq8 container test-container-subpath-preprovisionedpv-tvq8: <nil>
STEP: delete the pod
Jun 17 04:50:14.862: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tvq8 to disappear
Jun 17 04:50:14.971: INFO: Pod pod-subpath-test-preprovisionedpv-tvq8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tvq8
Jun 17 04:50:14.971: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tvq8" in namespace "provisioning-6564"
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":11,"skipped":119,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:17.223: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 109 lines ...
Jun 17 04:50:00.469: INFO: PersistentVolumeClaim pvc-lmtb5 found but phase is Pending instead of Bound.
Jun 17 04:50:02.577: INFO: PersistentVolumeClaim pvc-lmtb5 found and phase=Bound (14.854132861s)
Jun 17 04:50:02.577: INFO: Waiting up to 3m0s for PersistentVolume local-56r4x to have phase Bound
Jun 17 04:50:02.687: INFO: PersistentVolume local-56r4x found and phase=Bound (109.250352ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-gwjl
STEP: Creating a pod to test exec-volume-test
Jun 17 04:50:03.012: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-gwjl" in namespace "volume-2438" to be "Succeeded or Failed"
Jun 17 04:50:03.122: INFO: Pod "exec-volume-test-preprovisionedpv-gwjl": Phase="Pending", Reason="", readiness=false. Elapsed: 109.465267ms
Jun 17 04:50:05.239: INFO: Pod "exec-volume-test-preprovisionedpv-gwjl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227004176s
Jun 17 04:50:07.350: INFO: Pod "exec-volume-test-preprovisionedpv-gwjl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33797862s
Jun 17 04:50:09.461: INFO: Pod "exec-volume-test-preprovisionedpv-gwjl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449063264s
Jun 17 04:50:11.570: INFO: Pod "exec-volume-test-preprovisionedpv-gwjl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.558364761s
Jun 17 04:50:13.678: INFO: Pod "exec-volume-test-preprovisionedpv-gwjl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.666093874s
Jun 17 04:50:15.786: INFO: Pod "exec-volume-test-preprovisionedpv-gwjl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.773590361s
STEP: Saw pod success
Jun 17 04:50:15.786: INFO: Pod "exec-volume-test-preprovisionedpv-gwjl" satisfied condition "Succeeded or Failed"
Jun 17 04:50:15.892: INFO: Trying to get logs from node ip-172-20-38-101.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-gwjl container exec-container-preprovisionedpv-gwjl: <nil>
STEP: delete the pod
Jun 17 04:50:16.129: INFO: Waiting for pod exec-volume-test-preprovisionedpv-gwjl to disappear
Jun 17 04:50:16.235: INFO: Pod exec-volume-test-preprovisionedpv-gwjl no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-gwjl
Jun 17 04:50:16.236: INFO: Deleting pod "exec-volume-test-preprovisionedpv-gwjl" in namespace "volume-2438"
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":57,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:17.655: INFO: Only supported for providers [gce gke] (not aws)
... skipping 125 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI Volume expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:641
should not expand volume if resizingOnDriver=off, resizingOnSC=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:670
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":5,"skipped":40,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:20.411: INFO: Only supported for providers [gce gke] (not aws)
... skipping 69 lines ...
• [SLOW TEST:15.159 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":11,"skipped":77,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:22.022: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 37 lines ...
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":6,"skipped":52,"failed":0}
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:49:48.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-w4dx
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 04:49:49.440: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-w4dx" in namespace "subpath-146" to be "Succeeded or Failed"
Jun 17 04:49:49.544: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Pending", Reason="", readiness=false. Elapsed: 103.960204ms
Jun 17 04:49:51.648: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208169034s
Jun 17 04:49:53.754: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314016128s
Jun 17 04:49:55.860: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.419542461s
Jun 17 04:49:57.964: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Running", Reason="", readiness=true. Elapsed: 8.52427204s
Jun 17 04:50:00.071: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Running", Reason="", readiness=true. Elapsed: 10.630993123s
... skipping 6 lines ...
Jun 17 04:50:14.861: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Running", Reason="", readiness=true. Elapsed: 25.420863375s
Jun 17 04:50:16.965: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Running", Reason="", readiness=true. Elapsed: 27.525181555s
Jun 17 04:50:19.071: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Running", Reason="", readiness=true. Elapsed: 29.630853236s
Jun 17 04:50:21.176: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Running", Reason="", readiness=true. Elapsed: 31.735923314s
Jun 17 04:50:23.282: INFO: Pod "pod-subpath-test-configmap-w4dx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.841717946s
STEP: Saw pod success
Jun 17 04:50:23.282: INFO: Pod "pod-subpath-test-configmap-w4dx" satisfied condition "Succeeded or Failed"
Jun 17 04:50:23.386: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod pod-subpath-test-configmap-w4dx container test-container-subpath-configmap-w4dx: <nil>
STEP: delete the pod
Jun 17 04:50:23.622: INFO: Waiting for pod pod-subpath-test-configmap-w4dx to disappear
Jun 17 04:50:23.726: INFO: Pod pod-subpath-test-configmap-w4dx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-w4dx
Jun 17 04:50:23.726: INFO: Deleting pod "pod-subpath-test-configmap-w4dx" in namespace "subpath-146"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 04:50:08.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jun 17 04:50:08.874: INFO: Waiting up to 5m0s for pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a" in namespace "downward-api-5305" to be "Succeeded or Failed"
Jun 17 04:50:08.979: INFO: Pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a": Phase="Pending", Reason="", readiness=false. Elapsed: 104.524356ms
Jun 17 04:50:11.095: INFO: Pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220322112s
Jun 17 04:50:13.203: INFO: Pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328701941s
Jun 17 04:50:15.321: INFO: Pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446768978s
Jun 17 04:50:17.426: INFO: Pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551013123s
Jun 17 04:50:19.532: INFO: Pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657915231s
Jun 17 04:50:21.637: INFO: Pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.762405423s
Jun 17 04:50:23.750: INFO: Pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.875116761s
STEP: Saw pod success
Jun 17 04:50:23.750: INFO: Pod "downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a" satisfied condition "Succeeded or Failed"
Jun 17 04:50:23.854: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a container dapi-container: <nil>
STEP: delete the pod
Jun 17 04:50:24.107: INFO: Waiting for pod downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a to disappear
Jun 17 04:50:24.211: INFO: Pod downward-api-f88698d4-ba37-4e2a-b9bb-f3ea4b9e574a no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.385 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":78,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:24.444: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-b3575d48-d5d6-4fa1-9e15-0912a2e92815
STEP: Creating a pod to test consume configMaps
Jun 17 04:50:18.234: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-987ab12f-6490-4062-97e6-f63ac655ee3f" in namespace "projected-4578" to be "Succeeded or Failed"
Jun 17 04:50:18.340: INFO: Pod "pod-projected-configmaps-987ab12f-6490-4062-97e6-f63ac655ee3f": Phase="Pending", Reason="", readiness=false. Elapsed: 106.06559ms
Jun 17 04:50:20.445: INFO: Pod "pod-projected-configmaps-987ab12f-6490-4062-97e6-f63ac655ee3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211466565s
Jun 17 04:50:22.551: INFO: Pod "pod-projected-configmaps-987ab12f-6490-4062-97e6-f63ac655ee3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317200011s
Jun 17 04:50:24.657: INFO: Pod "pod-projected-configmaps-987ab12f-6490-4062-97e6-f63ac655ee3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.423517109s
STEP: Saw pod success
Jun 17 04:50:24.658: INFO: Pod "pod-projected-configmaps-987ab12f-6490-4062-97e6-f63ac655ee3f" satisfied condition "Succeeded or Failed"
Jun 17 04:50:24.762: INFO: Trying to get logs from node ip-172-20-50-49.eu-west-1.compute.internal pod pod-projected-configmaps-987ab12f-6490-4062-97e6-f63ac655ee3f container agnhost-container: <nil>
STEP: delete the pod
Jun 17 04:50:24.984: INFO: Waiting for pod pod-projected-configmaps-987ab12f-6490-4062-97e6-f63ac655ee3f to disappear
Jun 17 04:50:25.090: INFO: Pod pod-projected-configmaps-987ab12f-6490-4062-97e6-f63ac655ee3f no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.023 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":126,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:25.324: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 116 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Jun 17 04:49:53.906: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 04:49:54.125: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4999" in namespace "provisioning-4999" to be "Succeeded or Failed"
Jun 17 04:49:54.231: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Pending", Reason="", readiness=false. Elapsed: 105.351008ms
Jun 17 04:49:56.340: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214174014s
Jun 17 04:49:58.448: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322377083s
Jun 17 04:50:00.555: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429804948s
Jun 17 04:50:02.665: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540053437s
Jun 17 04:50:04.774: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.64876619s
STEP: Saw pod success
Jun 17 04:50:04.774: INFO: Pod "hostpath-symlink-prep-provisioning-4999" satisfied condition "Succeeded or Failed"
Jun 17 04:50:04.774: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4999" in namespace "provisioning-4999"
Jun 17 04:50:04.884: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4999" to be fully deleted
Jun 17 04:50:04.996: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xpdx
STEP: Creating a pod to test subpath
Jun 17 04:50:05.104: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xpdx" in namespace "provisioning-4999" to be "Succeeded or Failed"
Jun 17 04:50:05.223: INFO: Pod "pod-subpath-test-inlinevolume-xpdx": Phase="Pending", Reason="", readiness=false. Elapsed: 119.812465ms
Jun 17 04:50:07.337: INFO: Pod "pod-subpath-test-inlinevolume-xpdx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233454764s
Jun 17 04:50:09.444: INFO: Pod "pod-subpath-test-inlinevolume-xpdx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340522459s
Jun 17 04:50:11.550: INFO: Pod "pod-subpath-test-inlinevolume-xpdx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446834886s
Jun 17 04:50:13.657: INFO: Pod "pod-subpath-test-inlinevolume-xpdx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553246695s
Jun 17 04:50:15.769: INFO: Pod "pod-subpath-test-inlinevolume-xpdx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.665219081s
Jun 17 04:50:17.875: INFO: Pod "pod-subpath-test-inlinevolume-xpdx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.77133956s
STEP: Saw pod success
Jun 17 04:50:17.875: INFO: Pod "pod-subpath-test-inlinevolume-xpdx" satisfied condition "Succeeded or Failed"
Jun 17 04:50:17.980: INFO: Trying to get logs from node ip-172-20-46-241.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-xpdx container test-container-subpath-inlinevolume-xpdx: <nil>
STEP: delete the pod
Jun 17 04:50:18.207: INFO: Waiting for pod pod-subpath-test-inlinevolume-xpdx to disappear
Jun 17 04:50:18.316: INFO: Pod pod-subpath-test-inlinevolume-xpdx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-xpdx
Jun 17 04:50:18.316: INFO: Deleting pod "pod-subpath-test-inlinevolume-xpdx" in namespace "provisioning-4999"
STEP: Deleting pod
Jun 17 04:50:18.421: INFO: Deleting pod "pod-subpath-test-inlinevolume-xpdx" in namespace "provisioning-4999"
Jun 17 04:50:18.640: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4999" in namespace "provisioning-4999" to be "Succeeded or Failed"
Jun 17 04:50:18.745: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Pending", Reason="", readiness=false. Elapsed: 105.317812ms
Jun 17 04:50:20.851: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211127525s
Jun 17 04:50:22.959: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319271253s
Jun 17 04:50:25.068: INFO: Pod "hostpath-symlink-prep-provisioning-4999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42799124s
STEP: Saw pod success
Jun 17 04:50:25.068: INFO: Pod "hostpath-symlink-prep-provisioning-4999" satisfied condition "Succeeded or Failed"
Jun 17 04:50:25.068: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4999" in namespace "provisioning-4999"
Jun 17 04:50:25.178: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4999" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:50:25.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4999" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 04:50:25.507: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 134 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Jun 17 04:49:50.769: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 04:49:50.983: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4818" in namespace "provisioning-4818" to be "Succeeded or Failed"
Jun 17 04:49:51.088: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Pending", Reason="", readiness=false. Elapsed: 104.511629ms
Jun 17 04:49:53.193: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209823215s
Jun 17 04:49:55.299: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316061833s
Jun 17 04:49:57.404: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421323083s
Jun 17 04:49:59.544: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561267964s
Jun 17 04:50:01.651: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.667957284s
STEP: Saw pod success
Jun 17 04:50:01.651: INFO: Pod "hostpath-symlink-prep-provisioning-4818" satisfied condition "Succeeded or Failed"
Jun 17 04:50:01.651: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4818" in namespace "provisioning-4818"
Jun 17 04:50:01.761: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4818" to be fully deleted
Jun 17 04:50:01.875: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kkwr
STEP: Creating a pod to test subpath
Jun 17 04:50:01.982: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kkwr" in namespace "provisioning-4818" to be "Succeeded or Failed"
Jun 17 04:50:02.102: INFO: Pod "pod-subpath-test-inlinevolume-kkwr": Phase="Pending", Reason="", readiness=false. Elapsed: 119.416894ms
Jun 17 04:50:04.216: INFO: Pod "pod-subpath-test-inlinevolume-kkwr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233576222s
Jun 17 04:50:06.324: INFO: Pod "pod-subpath-test-inlinevolume-kkwr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341936271s
Jun 17 04:50:08.429: INFO: Pod "pod-subpath-test-inlinevolume-kkwr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447220581s
Jun 17 04:50:10.536: INFO: Pod "pod-subpath-test-inlinevolume-kkwr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553737112s
Jun 17 04:50:12.646: INFO: Pod "pod-subpath-test-inlinevolume-kkwr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.664148424s
Jun 17 04:50:14.765: INFO: Pod "pod-subpath-test-inlinevolume-kkwr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.782761834s
Jun 17 04:50:16.873: INFO: Pod "pod-subpath-test-inlinevolume-kkwr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.890887037s
Jun 17 04:50:18.979: INFO: Pod "pod-subpath-test-inlinevolume-kkwr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.996352799s
STEP: Saw pod success
Jun 17 04:50:18.979: INFO: Pod "pod-subpath-test-inlinevolume-kkwr" satisfied condition "Succeeded or Failed"
Jun 17 04:50:19.083: INFO: Trying to get logs from node ip-172-20-39-216.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-kkwr container test-container-subpath-inlinevolume-kkwr: <nil>
STEP: delete the pod
Jun 17 04:50:19.315: INFO: Waiting for pod pod-subpath-test-inlinevolume-kkwr to disappear
Jun 17 04:50:19.420: INFO: Pod pod-subpath-test-inlinevolume-kkwr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kkwr
Jun 17 04:50:19.420: INFO: Deleting pod "pod-subpath-test-inlinevolume-kkwr" in namespace "provisioning-4818"
STEP: Deleting pod
Jun 17 04:50:19.524: INFO: Deleting pod "pod-subpath-test-inlinevolume-kkwr" in namespace "provisioning-4818"
Jun 17 04:50:19.748: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4818" in namespace "provisioning-4818" to be "Succeeded or Failed"
Jun 17 04:50:19.854: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Pending", Reason="", readiness=false. Elapsed: 105.491669ms
Jun 17 04:50:21.958: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210304537s
Jun 17 04:50:24.067: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318522632s
Jun 17 04:50:26.175: INFO: Pod "hostpath-symlink-prep-provisioning-4818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.426815237s
STEP: Saw pod success
Jun 17 04:50:26.175: INFO: Pod "hostpath-symlink-prep-provisioning-4818" satisfied condition "Succeeded or Failed"
Jun 17 04:50:26.175: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4818" in namespace "provisioning-4818"
Jun 17 04:50:26.299: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4818" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 04:50:26.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4818" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":81,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 53512 lines ...
al-5969-3252/csi-hostpathplugin-768846f5c4\" objectUID=b0c30b72-9066-4801-a93c-ae574b98c4ac kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:51:49.229666 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5969-3252/csi-hostpathplugin-0\" objectUID=9c8fe2a7-aad4-4f9f-a1b1-5f8246c3d0a4 kind=\"Pod\" propagationPolicy=Background\nI0617 04:51:49.902959 10 namespace_controller.go:185] Namespace has been deleted certificates-1355\nI0617 04:51:49.975070 10 namespace_controller.go:185] Namespace has been deleted ephemeral-4698-2233\nI0617 04:51:50.204619 10 namespace_controller.go:185] Namespace has been deleted provisioning-3029\nI0617 04:51:50.214968 10 garbagecollector.go:468] \"Processing object\" object=\"provisioning-3029-4178/csi-hostpathplugin-5cb9f8fbb9\" objectUID=2ac65d8f-734f-4d51-8d86-96ed312d06ca kind=\"ControllerRevision\" virtual=false\nI0617 04:51:50.215309 10 stateful_set.go:443] StatefulSet has been deleted provisioning-3029-4178/csi-hostpathplugin\nI0617 04:51:50.215350 10 garbagecollector.go:468] \"Processing object\" object=\"provisioning-3029-4178/csi-hostpathplugin-0\" objectUID=f5b82a67-b893-4779-a323-2f4f23e29f11 kind=\"Pod\" virtual=false\nI0617 04:51:50.217093 10 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3029-4178/csi-hostpathplugin-5cb9f8fbb9\" objectUID=2ac65d8f-734f-4d51-8d86-96ed312d06ca kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:51:50.218298 10 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-3029-4178/csi-hostpathplugin-0\" objectUID=f5b82a67-b893-4779-a323-2f4f23e29f11 kind=\"Pod\" propagationPolicy=Background\nE0617 04:51:50.311346 10 tokens_controller.go:262] error synchronizing serviceaccount nettest-7250/default: secrets \"default-token-wnwp8\" is forbidden: unable to create new content in namespace nettest-7250 because it is being terminated\nI0617 04:51:50.743216 10 garbagecollector.go:468] \"Processing object\" object=\"dns-8598/dns-test-7d3accb0-b93c-4be0-8c39-3d196598a99b\" objectUID=f4b835da-a01f-4bac-b12b-a2c2c1f4f226 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:51:50.746702 10 garbagecollector.go:580] \"Deleting object\" object=\"dns-8598/dns-test-7d3accb0-b93c-4be0-8c39-3d196598a99b\" objectUID=f4b835da-a01f-4bac-b12b-a2c2c1f4f226 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0617 04:51:51.006922 10 tokens_controller.go:262] error synchronizing serviceaccount security-context-test-732/default: secrets \"default-token-5hwd8\" is forbidden: unable to create new content in namespace security-context-test-732 because it is being terminated\nE0617 04:51:53.130658 10 tokens_controller.go:262] error synchronizing serviceaccount pods-8398/default: secrets \"default-token-nwkp2\" is forbidden: unable to create new content in namespace pods-8398 because it is being terminated\nE0617 04:51:53.896515 10 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-6652/default: secrets \"default-token-22tzt\" is forbidden: unable to create new content in namespace csi-mock-volumes-6652 because it is being terminated\nI0617 04:51:54.075887 10 event.go:294] \"Event occurred\" object=\"provisioning-5972-2646/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0617 04:51:54.424550 10 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-3043-4283\nI0617 04:51:54.599817 10 
event.go:294] \"Event occurred\" object=\"provisioning-5972/pvc-9ffpw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-5972\\\" or manually created by system administrator\"\nI0617 04:51:56.080216 10 namespace_controller.go:185] Namespace has been deleted security-context-test-732\nE0617 04:51:56.226855 10 tokens_controller.go:262] error synchronizing serviceaccount dns-8598/default: secrets \"default-token-rxmdh\" is forbidden: unable to create new content in namespace dns-8598 because it is being terminated\nI0617 04:51:56.771833 10 namespace_controller.go:185] Namespace has been deleted nettest-8675\nI0617 04:51:57.289808 10 event.go:294] \"Event occurred\" object=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-7298\\\" or manually created by system administrator\"\nI0617 04:51:57.290035 10 event.go:294] \"Event occurred\" object=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-7298\\\" or manually created by system administrator\"\nI0617 04:51:57.301599 10 pv_controller.go:887] volume \"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\" entered phase \"Bound\"\nI0617 04:51:57.301624 10 pv_controller.go:990] volume \"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\" bound to claim \"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\"\nI0617 04:51:57.308132 10 pv_controller.go:831] claim \"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\" entered phase \"Bound\"\nI0617 04:51:58.043550 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-6652-1539/csi-mockplugin-6f846766f6\" objectUID=a4d4f3ec-3d80-4e4b-b40c-067d5a86e650 kind=\"ControllerRevision\" virtual=false\nI0617 04:51:58.044119 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-6652-1539/csi-mockplugin-0\" objectUID=9ff4b575-e792-4fcd-94a2-5300fcc47437 kind=\"Pod\" virtual=false\nI0617 04:51:58.044248 10 stateful_set.go:443] StatefulSet has been deleted csi-mock-volumes-6652-1539/csi-mockplugin\nI0617 04:51:58.046435 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6652-1539/csi-mockplugin-6f846766f6\" objectUID=a4d4f3ec-3d80-4e4b-b40c-067d5a86e650 kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:51:58.046609 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6652-1539/csi-mockplugin-0\" objectUID=9ff4b575-e792-4fcd-94a2-5300fcc47437 kind=\"Pod\" propagationPolicy=Background\nI0617 04:51:58.334251 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-6652-1539/csi-mockplugin-attacher-6485c548d6\" objectUID=febd296b-8928-4d5e-9631-11ddebd1b5f9 kind=\"ControllerRevision\" virtual=false\nI0617 04:51:58.334493 10 stateful_set.go:443] StatefulSet has been deleted csi-mock-volumes-6652-1539/csi-mockplugin-attacher\nI0617 04:51:58.334563 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-6652-1539/csi-mockplugin-attacher-0\" objectUID=02c02c34-ec61-4efd-885a-d171534a0e35 kind=\"Pod\" virtual=false\nI0617 
04:51:58.337576 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6652-1539/csi-mockplugin-attacher-6485c548d6\" objectUID=febd296b-8928-4d5e-9631-11ddebd1b5f9 kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:51:58.337943 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6652-1539/csi-mockplugin-attacher-0\" objectUID=02c02c34-ec61-4efd-885a-d171534a0e35 kind=\"Pod\" propagationPolicy=Background\nI0617 04:51:58.661356 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7298^375961b9-edf9-11ec-a365-66fc70675f4a\") from node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nI0617 04:51:58.939930 10 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6652\nI0617 04:51:59.169282 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7298^375961b9-edf9-11ec-a365-66fc70675f4a\") from node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nI0617 04:51:59.169568 10 event.go:294] \"Event occurred\" object=\"ephemeral-7298/inline-volume-tester2-vldfg\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\\\" \"\nE0617 04:51:59.361855 10 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9712/default: secrets \"default-token-89b87\" is forbidden: unable to create new content in namespace provisioning-9712 because it is being terminated\nI0617 04:51:59.587269 10 namespace_controller.go:185] Namespace has been deleted ephemeral-5969-3252\nE0617 04:51:59.892550 10 tokens_controller.go:262] error synchronizing serviceaccount kubectl-5128/default: secrets \"default-token-tgq5j\" is forbidden: unable to create new content in namespace kubectl-5128 because it is being terminated\nI0617 04:52:00.049240 10 pv_controller.go:887] volume \"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\" entered phase \"Bound\"\nI0617 04:52:00.049267 10 pv_controller.go:990] volume \"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\" bound to claim \"provisioning-5972/pvc-9ffpw\"\nI0617 04:52:00.055409 10 pv_controller.go:831] claim \"provisioning-5972/pvc-9ffpw\" entered phase \"Bound\"\nI0617 04:52:00.514783 10 pv_controller.go:938] claim \"provisioning-4947/pvc-4gtlf\" bound to volume \"local-cw7jf\"\nI0617 04:52:00.522019 10 pv_controller.go:887] volume \"local-cw7jf\" entered phase \"Bound\"\nI0617 04:52:00.522515 10 pv_controller.go:990] volume \"local-cw7jf\" bound to claim \"provisioning-4947/pvc-4gtlf\"\nI0617 04:52:00.530691 10 pv_controller.go:831] claim \"provisioning-4947/pvc-4gtlf\" entered phase \"Bound\"\nI0617 04:52:00.531244 10 event.go:294] \"Event occurred\" object=\"volume-provisioning-106/pvc-hv5dl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:52:00.792288 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"pvc-protection-6621/pvc-tester-tt6sl\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:00.792487 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 
04:52:00.857508 10 namespace_controller.go:185] Namespace has been deleted provisioning-3029-4178\nI0617 04:52:01.005157 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"pvc-protection-6621/pvc-tester-tt6sl\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:01.005188 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:01.010849 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"pvc-protection-6621/pvc-tester-tt6sl\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:01.010868 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:01.264970 10 namespace_controller.go:185] Namespace has been deleted dns-8598\nI0617 04:52:01.289619 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-4594/pvc-78sgk\"\nI0617 04:52:01.295839 10 pv_controller.go:648] volume \"local-6vb7s\" is released and reclaim policy \"Retain\" will be executed\nI0617 04:52:01.299173 10 pv_controller.go:887] volume \"local-6vb7s\" entered phase \"Released\"\nI0617 04:52:01.329814 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"pvc-protection-6621/pvc-tester-tt6sl\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:01.329833 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:01.333521 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"pvc-protection-6621/pvc-tester-tt6sl\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:01.333538 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:01.403811 10 pv_controller_base.go:533] deletion of claim \"provisioning-4594/pvc-78sgk\" was already processed\nW0617 04:52:01.610950 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:01.610973 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:02.471248 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-1128, name: inline-volume-tester-52vzp, uid: 8ce87c9c-e937-4243-ae94-fb55c5501050] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:02.471410 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\" objectUID=582e3c37-7490-47ea-afca-60e202f9fd66 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:02.472007 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\" objectUID=058bbdb3-eda9-445c-9ab0-ed3a48505945 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:02.472094 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp\" objectUID=aa694ae5-b792-421d-bdc3-b3fafcff83ba kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:02.473924 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp\" objectUID=8ce87c9c-e937-4243-ae94-fb55c5501050 kind=\"Pod\" virtual=false\nI0617 04:52:02.477289 10 garbagecollector.go:595] adding 
[v1/PersistentVolumeClaim, namespace: ephemeral-1128, name: inline-volume-tester-52vzp-my-volume-0, uid: 582e3c37-7490-47ea-afca-60e202f9fd66] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1128, name: inline-volume-tester-52vzp, uid: 8ce87c9c-e937-4243-ae94-fb55c5501050] is deletingDependents\nI0617 04:52:02.477308 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1128, name: inline-volume-tester-52vzp-my-volume-1, uid: 058bbdb3-eda9-445c-9ab0-ed3a48505945] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1128, name: inline-volume-tester-52vzp, uid: 8ce87c9c-e937-4243-ae94-fb55c5501050] is deletingDependents\nI0617 04:52:02.482085 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\" objectUID=058bbdb3-eda9-445c-9ab0-ed3a48505945 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:52:02.482315 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\" objectUID=582e3c37-7490-47ea-afca-60e202f9fd66 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:52:02.482501 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1128/inline-volume-tester-52vzp\" objectUID=aa694ae5-b792-421d-bdc3-b3fafcff83ba kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:02.486433 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\" objectUID=582e3c37-7490-47ea-afca-60e202f9fd66 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:02.489164 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp\" objectUID=8ce87c9c-e937-4243-ae94-fb55c5501050 kind=\"Pod\" virtual=false\nI0617 04:52:02.489734 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\" objectUID=058bbdb3-eda9-445c-9ab0-ed3a48505945 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:02.491108 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-1128/inline-volume-tester-52vzp\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\"\nI0617 04:52:02.491122 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\"\nI0617 04:52:02.491686 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-1128/inline-volume-tester-52vzp\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\"\nI0617 04:52:02.491698 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\"\nI0617 04:52:02.492690 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\" objectUID=582e3c37-7490-47ea-afca-60e202f9fd66 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:52:02.492988 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1128, name: inline-volume-tester-52vzp-my-volume-0, uid: 582e3c37-7490-47ea-afca-60e202f9fd66] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1128, name: inline-volume-tester-52vzp, uid: 8ce87c9c-e937-4243-ae94-fb55c5501050] is deletingDependents\nI0617 04:52:02.493007 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1128, name: inline-volume-tester-52vzp-my-volume-1, uid: 
058bbdb3-eda9-445c-9ab0-ed3a48505945] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1128, name: inline-volume-tester-52vzp, uid: 8ce87c9c-e937-4243-ae94-fb55c5501050] is deletingDependents\nI0617 04:52:02.496238 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\" objectUID=058bbdb3-eda9-445c-9ab0-ed3a48505945 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:52:02.496548 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\" objectUID=582e3c37-7490-47ea-afca-60e202f9fd66 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:02.498535 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\" objectUID=058bbdb3-eda9-445c-9ab0-ed3a48505945 kind=\"PersistentVolumeClaim\" virtual=false\nW0617 04:52:03.527099 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:03.527189 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:03.704739 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^38f510e3-edf9-11ec-a4a3-4e4408ec2313\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:04.240044 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^38f510e3-edf9-11ec-a4a3-4e4408ec2313\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:04.240202 10 event.go:294] \"Event occurred\" object=\"provisioning-5972/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\\\" \"\nI0617 04:52:04.433483 10 namespace_controller.go:185] Namespace has been deleted provisioning-9712\nI0617 04:52:04.947568 10 namespace_controller.go:185] Namespace has been deleted kubectl-5128\nI0617 04:52:05.019578 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"pvc-protection-6621/pvc-tester-tt6sl\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:05.019597 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:05.419480 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"pvc-protection-6621/pvc-tester-tt6sl\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:05.420243 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:05.426696 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"pvc-protection-6621/pvc-protection25p8z\"\nI0617 04:52:05.433902 10 pv_controller.go:648] volume \"pvc-ab36f98e-1e0d-4b1b-907a-17f78989d2d5\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:52:05.437293 10 pv_controller.go:887] volume \"pvc-ab36f98e-1e0d-4b1b-907a-17f78989d2d5\" entered phase \"Released\"\nI0617 04:52:05.439534 10 pv_controller.go:1348] 
isVolumeReleased[pvc-ab36f98e-1e0d-4b1b-907a-17f78989d2d5]: volume is released\nI0617 04:52:05.641440 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-1973/agnhost-primary\" need=1 creating=1\nI0617 04:52:05.661935 10 event.go:294] \"Event occurred\" object=\"kubectl-1973/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-rkzdr\"\nE0617 04:52:05.779953 10 tokens_controller.go:262] error synchronizing serviceaccount provisioning-8262/default: secrets \"default-token-lzzdm\" is forbidden: unable to create new content in namespace provisioning-8262 because it is being terminated\nI0617 04:52:05.908621 10 namespace_controller.go:185] Namespace has been deleted nettest-7250\nI0617 04:52:06.024149 10 event.go:294] \"Event occurred\" object=\"provisioning-1437/aws2h9l9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0617 04:52:06.239869 10 event.go:294] \"Event occurred\" object=\"provisioning-1437/aws2h9l9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:52:06.240149 10 event.go:294] \"Event occurred\" object=\"provisioning-1437/aws2h9l9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:52:06.346167 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-ab36f98e-1e0d-4b1b-907a-17f78989d2d5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0b081a8af61f8d6db\") on node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:52:06.349195 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-ab36f98e-1e0d-4b1b-907a-17f78989d2d5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0b081a8af61f8d6db\") on node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:52:07.075727 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-7286/pod-release\" need=1 creating=1\nI0617 04:52:07.081268 10 event.go:294] \"Event occurred\" object=\"replication-controller-7286/pod-release\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pod-release-6b2mg\"\nI0617 04:52:07.399742 10 controller_ref_manager.go:239] patching pod replication-controller-7286_pod-release-6b2mg to remove its controllerRef to v1/ReplicationController:pod-release\nI0617 04:52:07.404609 10 garbagecollector.go:468] \"Processing object\" object=\"replication-controller-7286/pod-release\" objectUID=2a5de682-585e-464b-92b5-c5c4dfdd6ab7 kind=\"ReplicationController\" virtual=false\nI0617 04:52:07.405193 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-7286/pod-release\" need=1 creating=1\nI0617 04:52:07.409040 10 garbagecollector.go:507] object [v1/ReplicationController, namespace: replication-controller-7286, name: pod-release, uid: 2a5de682-585e-464b-92b5-c5c4dfdd6ab7]'s doesn't have an owner, continue on next item\nI0617 04:52:07.410745 10 event.go:294] \"Event occurred\" object=\"replication-controller-7286/pod-release\" 
kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pod-release-dfjxn\"\nI0617 04:52:07.465306 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-7298, name: inline-volume-tester2-vldfg, uid: df08e4d2-ec37-4ba8-b31a-9939aba9964a] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:07.465390 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester2-vldfg\" objectUID=d104e877-5775-4fe7-a2ee-0727187bf32c kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:07.465968 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester2-vldfg\" objectUID=df08e4d2-ec37-4ba8-b31a-9939aba9964a kind=\"Pod\" virtual=false\nI0617 04:52:07.466093 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\" objectUID=edf91ebb-567f-45ac-a7d8-0afe3641a631 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:07.473063 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7298, name: inline-volume-tester2-vldfg-my-volume-0, uid: edf91ebb-567f-45ac-a7d8-0afe3641a631] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7298, name: inline-volume-tester2-vldfg, uid: df08e4d2-ec37-4ba8-b31a-9939aba9964a] is deletingDependents\nI0617 04:52:07.474614 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7298/inline-volume-tester2-vldfg\" objectUID=d104e877-5775-4fe7-a2ee-0727187bf32c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:07.474949 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\" objectUID=edf91ebb-567f-45ac-a7d8-0afe3641a631 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:52:07.480301 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester2-vldfg\" objectUID=df08e4d2-ec37-4ba8-b31a-9939aba9964a kind=\"Pod\" virtual=false\nI0617 04:52:07.480904 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-7298/inline-volume-tester2-vldfg\" PVC=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\"\nI0617 04:52:07.480921 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\"\nI0617 04:52:07.481030 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\" objectUID=edf91ebb-567f-45ac-a7d8-0afe3641a631 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:07.481909 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7298, name: inline-volume-tester2-vldfg-my-volume-0, uid: edf91ebb-567f-45ac-a7d8-0afe3641a631] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7298, name: inline-volume-tester2-vldfg, uid: df08e4d2-ec37-4ba8-b31a-9939aba9964a] is deletingDependents\nI0617 04:52:07.481954 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\" objectUID=edf91ebb-567f-45ac-a7d8-0afe3641a631 kind=\"PersistentVolumeClaim\" virtual=false\nE0617 04:52:07.506844 10 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4594/default: secrets \"default-token-w2g48\" is forbidden: unable to create new content in namespace provisioning-4594 because it is being terminated\nW0617 04:52:07.692236 10 
reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:07.692273 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0617 04:52:07.738174 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:07.738195 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:08.609801 10 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6652-1539\nI0617 04:52:09.626675 10 pv_controller.go:887] volume \"pvc-08825914-98e2-40bb-93f0-f035774cdafb\" entered phase \"Bound\"\nI0617 04:52:09.626720 10 pv_controller.go:990] volume \"pvc-08825914-98e2-40bb-93f0-f035774cdafb\" bound to claim \"provisioning-1437/aws2h9l9\"\nI0617 04:52:09.633425 10 pv_controller.go:831] claim \"provisioning-1437/aws2h9l9\" entered phase \"Bound\"\nE0617 04:52:10.066221 10 tokens_controller.go:262] error synchronizing serviceaccount secrets-418/default: secrets \"default-token-w9mfx\" is forbidden: unable to create new content in namespace secrets-418 because it is being terminated\nI0617 04:52:10.269037 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-08825914-98e2-40bb-93f0-f035774cdafb\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0cecfbbe5e01da5e4\") from node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nI0617 04:52:10.380623 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-4947/pvc-4gtlf\"\nI0617 04:52:10.386349 10 pv_controller.go:648] volume \"local-cw7jf\" is released and reclaim policy \"Retain\" will be executed\nI0617 04:52:10.389946 10 pv_controller.go:887] volume \"local-cw7jf\" entered phase \"Released\"\nI0617 04:52:10.489171 10 pv_controller_base.go:533] deletion of claim \"provisioning-4947/pvc-4gtlf\" was already processed\nI0617 04:52:10.868845 10 namespace_controller.go:185] Namespace has been deleted provisioning-8262\nI0617 04:52:10.933774 10 namespace_controller.go:185] Namespace has been deleted webhook-1791\nE0617 04:52:11.333352 10 tokens_controller.go:262] error synchronizing serviceaccount pvc-protection-6621/default: secrets \"default-token-c67mf\" is forbidden: unable to create new content in namespace pvc-protection-6621 because it is being terminated\nE0617 04:52:11.710864 10 tokens_controller.go:262] error synchronizing serviceaccount limitrange-7680/default: secrets \"default-token-6nhvw\" is forbidden: unable to create new content in namespace limitrange-7680 because it is being terminated\nI0617 04:52:12.519700 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"apply-1843/deployment-8d545c96d\" need=3 creating=3\nI0617 04:52:12.520566 10 event.go:294] \"Event occurred\" object=\"apply-1843/deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-8d545c96d to 3\"\nI0617 04:52:12.526713 10 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"apply-1843/deployment\" err=\"Operation cannot be fulfilled on deployments.apps 
\\\"deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0617 04:52:12.528170 10 event.go:294] \"Event occurred\" object=\"apply-1843/deployment-8d545c96d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-8d545c96d-zpz29\"\nI0617 04:52:12.541165 10 event.go:294] \"Event occurred\" object=\"apply-1843/deployment-8d545c96d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-8d545c96d-lr297\"\nI0617 04:52:12.541204 10 event.go:294] \"Event occurred\" object=\"apply-1843/deployment-8d545c96d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-8d545c96d-zqxj8\"\nI0617 04:52:12.562277 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-08825914-98e2-40bb-93f0-f035774cdafb\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0cecfbbe5e01da5e4\") from node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nI0617 04:52:12.563192 10 event.go:294] \"Event occurred\" object=\"provisioning-1437/pod-subpath-test-dynamicpv-rnzc\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-08825914-98e2-40bb-93f0-f035774cdafb\\\" \"\nI0617 04:52:12.627526 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"apply-1843/deployment-7c658794b9\" need=1 creating=1\nI0617 04:52:12.628192 10 event.go:294] \"Event occurred\" object=\"apply-1843/deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-7c658794b9 to 1\"\nI0617 04:52:12.633446 10 event.go:294] \"Event occurred\" object=\"apply-1843/deployment-7c658794b9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-7c658794b9-kbjcm\"\nI0617 04:52:12.656143 10 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"apply-1843/deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0617 04:52:12.656524 10 namespace_controller.go:185] Namespace has been deleted provisioning-4594\nE0617 04:52:12.690282 10 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-3793/default: secrets \"default-token-8qgpd\" is forbidden: unable to create new content in namespace ephemeral-3793 because it is being terminated\nE0617 04:52:12.722729 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nI0617 04:52:12.811453 10 garbagecollector.go:468] \"Processing object\" object=\"replication-controller-7286/pod-release-dfjxn\" objectUID=ca06c116-1065-42f9-bda0-300e2a485a6c kind=\"Pod\" virtual=false\nI0617 04:52:12.814635 10 garbagecollector.go:580] \"Deleting object\" object=\"replication-controller-7286/pod-release-dfjxn\" objectUID=ca06c116-1065-42f9-bda0-300e2a485a6c kind=\"Pod\" propagationPolicy=Background\nE0617 04:52:12.943323 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nI0617 04:52:13.050461 10 garbagecollector.go:468] \"Processing object\" 
object=\"apply-1843/deployment-7c658794b9\" objectUID=cdbe5b1b-b781-4114-805d-87c4df986188 kind=\"ReplicaSet\" virtual=false\nI0617 04:52:13.050609 10 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"apply-1843/deployment\"\nI0617 04:52:13.050767 10 garbagecollector.go:468] \"Processing object\" object=\"apply-1843/deployment-8d545c96d\" objectUID=6e4dc9b5-5152-4393-86e4-aaccedade8e9 kind=\"ReplicaSet\" virtual=false\nI0617 04:52:13.052433 10 garbagecollector.go:580] \"Deleting object\" object=\"apply-1843/deployment-7c658794b9\" objectUID=cdbe5b1b-b781-4114-805d-87c4df986188 kind=\"ReplicaSet\" propagationPolicy=Background\nI0617 04:52:13.053465 10 garbagecollector.go:580] \"Deleting object\" object=\"apply-1843/deployment-8d545c96d\" objectUID=6e4dc9b5-5152-4393-86e4-aaccedade8e9 kind=\"ReplicaSet\" propagationPolicy=Background\nI0617 04:52:13.056053 10 garbagecollector.go:468] \"Processing object\" object=\"apply-1843/deployment-7c658794b9-kbjcm\" objectUID=2d544817-6038-4735-8886-a93e9c8a155f kind=\"Pod\" virtual=false\nI0617 04:52:13.058277 10 garbagecollector.go:580] \"Deleting object\" object=\"apply-1843/deployment-7c658794b9-kbjcm\" objectUID=2d544817-6038-4735-8886-a93e9c8a155f kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:13.058532 10 garbagecollector.go:468] \"Processing object\" object=\"apply-1843/deployment-8d545c96d-zpz29\" objectUID=7bcd0653-506e-490f-be46-1e9c502b32d7 kind=\"Pod\" virtual=false\nI0617 04:52:13.058682 10 garbagecollector.go:468] \"Processing object\" object=\"apply-1843/deployment-8d545c96d-lr297\" objectUID=75ee42f3-353d-4db1-ad02-c55fa21ac8aa kind=\"Pod\" virtual=false\nI0617 04:52:13.058790 10 garbagecollector.go:468] \"Processing object\" object=\"apply-1843/deployment-8d545c96d-zqxj8\" objectUID=7c125bb9-db9c-48ec-b42f-106143a01594 kind=\"Pod\" virtual=false\nI0617 04:52:13.061452 10 garbagecollector.go:580] \"Deleting object\" object=\"apply-1843/deployment-8d545c96d-zpz29\" objectUID=7bcd0653-506e-490f-be46-1e9c502b32d7 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:13.062280 10 garbagecollector.go:580] \"Deleting object\" object=\"apply-1843/deployment-8d545c96d-lr297\" objectUID=75ee42f3-353d-4db1-ad02-c55fa21ac8aa kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:13.062525 10 garbagecollector.go:580] \"Deleting object\" object=\"apply-1843/deployment-8d545c96d-zqxj8\" objectUID=7c125bb9-db9c-48ec-b42f-106143a01594 kind=\"Pod\" propagationPolicy=Background\nE0617 04:52:13.073785 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nE0617 04:52:13.195305 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nE0617 04:52:13.328126 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nE0617 04:52:13.495820 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nI0617 04:52:13.805289 10 namespace_controller.go:185] Namespace has been deleted provisioning-5142\nI0617 04:52:13.863469 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-ab36f98e-1e0d-4b1b-907a-17f78989d2d5\" (UniqueName: 
\"kubernetes.io/csi/ebs.csi.aws.com^vol-0b081a8af61f8d6db\") on node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nE0617 04:52:13.923838 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nE0617 04:52:14.348293 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nI0617 04:52:14.359561 10 namespace_controller.go:185] Namespace has been deleted svcaccounts-8645\nI0617 04:52:15.109287 10 namespace_controller.go:185] Namespace has been deleted secrets-418\nE0617 04:52:15.152326 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nI0617 04:52:15.516560 10 event.go:294] \"Event occurred\" object=\"volume-provisioning-106/pvc-hv5dl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:52:15.519644 10 pv_controller.go:1348] isVolumeReleased[pvc-ab36f98e-1e0d-4b1b-907a-17f78989d2d5]: volume is released\nI0617 04:52:16.347679 10 namespace_controller.go:185] Namespace has been deleted pvc-protection-6621\nE0617 04:52:16.521784 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nE0617 04:52:16.642098 10 tokens_controller.go:262] error synchronizing serviceaccount pv-6169/default: secrets \"default-token-qzf4r\" is forbidden: unable to create new content in namespace pv-6169 because it is being terminated\nI0617 04:52:16.780430 10 namespace_controller.go:185] Namespace has been deleted limitrange-7680\nI0617 04:52:17.895779 10 namespace_controller.go:185] Namespace has been deleted ephemeral-3793\nE0617 04:52:17.942398 10 pv_controller.go:1459] error finding provisioning plugin for claim ephemeral-1007/inline-volume-tpt8b-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:52:17.943314 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007/inline-volume-tpt8b-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:52:17.965358 10 namespace_controller.go:185] Namespace has been deleted replication-controller-7286\nI0617 04:52:18.259551 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-1007, name: inline-volume-tpt8b, uid: 60e07ef4-9864-4ffe-92c5-51d32a0aeb57] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:18.260019 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tpt8b\" objectUID=60e07ef4-9864-4ffe-92c5-51d32a0aeb57 kind=\"Pod\" virtual=false\nI0617 04:52:18.260359 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tpt8b-my-volume\" objectUID=d3979de3-5ba6-438c-a7ae-3b200795506d kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:18.274018 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1007, name: inline-volume-tpt8b-my-volume, uid: d3979de3-5ba6-438c-a7ae-3b200795506d] to attemptToDelete, because its owner 
[v1/Pod, namespace: ephemeral-1007, name: inline-volume-tpt8b, uid: 60e07ef4-9864-4ffe-92c5-51d32a0aeb57] is deletingDependents\nI0617 04:52:18.275201 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1007/inline-volume-tpt8b-my-volume\" objectUID=d3979de3-5ba6-438c-a7ae-3b200795506d kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nE0617 04:52:18.278186 10 pv_controller.go:1459] error finding provisioning plugin for claim ephemeral-1007/inline-volume-tpt8b-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:52:18.278615 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007/inline-volume-tpt8b-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:52:18.278672 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tpt8b-my-volume\" objectUID=d3979de3-5ba6-438c-a7ae-3b200795506d kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:18.281250 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-1007/inline-volume-tpt8b-my-volume\"\nI0617 04:52:18.284568 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tpt8b\" objectUID=60e07ef4-9864-4ffe-92c5-51d32a0aeb57 kind=\"Pod\" virtual=false\nI0617 04:52:18.285845 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-1007, name: inline-volume-tpt8b, uid: 60e07ef4-9864-4ffe-92c5-51d32a0aeb57]\nE0617 04:52:18.841996 10 tokens_controller.go:262] error synchronizing serviceaccount kubectl-1973/default: secrets \"default-token-kh6l2\" is forbidden: unable to create new content in namespace kubectl-1973 because it is being terminated\nI0617 04:52:18.891497 10 garbagecollector.go:468] \"Processing object\" object=\"kubectl-1973/agnhost-primary-rkzdr\" objectUID=c7d1437a-3d60-45db-be5d-2067810cc88f kind=\"Pod\" virtual=false\nI0617 04:52:18.895077 10 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-1973/agnhost-primary-rkzdr\" objectUID=c7d1437a-3d60-45db-be5d-2067810cc88f kind=\"Pod\" propagationPolicy=Background\nE0617 04:52:19.334037 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nE0617 04:52:20.594199 10 pv_protection_controller.go:114] PV pvc-ab36f98e-1e0d-4b1b-907a-17f78989d2d5 failed with : Operation cannot be fulfilled on persistentvolumes \"pvc-ab36f98e-1e0d-4b1b-907a-17f78989d2d5\": the object has been modified; please apply your changes to the latest version and try again\nI0617 04:52:20.597391 10 pv_controller_base.go:533] deletion of claim \"pvc-protection-6621/pvc-protection25p8z\" was already processed\nE0617 04:52:21.520303 10 pv_controller.go:1459] error finding provisioning plugin for claim provisioning-9369/pvc-c27kn: storageclass.storage.k8s.io \"provisioning-9369\" not found\nI0617 04:52:21.520802 10 event.go:294] \"Event occurred\" object=\"provisioning-9369/pvc-c27kn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9369\\\" not found\"\nI0617 04:52:21.630848 10 pv_controller.go:887] volume \"local-jr467\" entered phase \"Available\"\nI0617 04:52:21.696235 10 namespace_controller.go:185] Namespace has been deleted pv-6169\nW0617 04:52:22.174123 10 
reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:22.174265 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:23.138157 10 event.go:294] \"Event occurred\" object=\"provisioning-5972/pvc-2pt2m\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-5972\\\" or manually created by system administrator\"\nI0617 04:52:23.176021 10 namespace_controller.go:185] Namespace has been deleted provisioning-4947\nI0617 04:52:23.237139 10 pv_controller.go:887] volume \"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\" entered phase \"Bound\"\nI0617 04:52:23.237301 10 pv_controller.go:990] volume \"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\" bound to claim \"provisioning-5972/pvc-2pt2m\"\nI0617 04:52:23.249583 10 pv_controller.go:831] claim \"provisioning-5972/pvc-2pt2m\" entered phase \"Bound\"\nI0617 04:52:23.278430 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^46c444c9-edf9-11ec-a4a3-4e4408ec2313\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:23.398657 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007-3379/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0617 04:52:23.703140 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-pqlz6 to be scheduled\"\nI0617 04:52:23.711993 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-pqlz6 to be scheduled\"\nI0617 04:52:23.812714 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^46c444c9-edf9-11ec-a4a3-4e4408ec2313\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:23.812989 10 event.go:294] \"Event occurred\" object=\"provisioning-5972/hostpath-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\\\" \"\nI0617 04:52:24.103132 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^38f510e3-edf9-11ec-a4a3-4e4408ec2313\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:24.104763 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^38f510e3-edf9-11ec-a4a3-4e4408ec2313\") 
on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nE0617 04:52:24.549705 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nI0617 04:52:24.623457 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^38f510e3-edf9-11ec-a4a3-4e4408ec2313\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:24.745129 10 namespace_controller.go:185] Namespace has been deleted tables-4014\nW0617 04:52:25.622156 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:25.622417 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:25.661802 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-1007\\\" or manually created by system administrator\"\nI0617 04:52:25.666028 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-1007\\\" or manually created by system administrator\"\nI0617 04:52:25.671567 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-1007\\\" or manually created by system administrator\"\nI0617 04:52:26.129797 10 namespace_controller.go:185] Namespace has been deleted kubectl-1671\nI0617 04:52:27.182259 10 namespace_controller.go:185] Namespace has been deleted metadata-concealment-6192\nE0617 04:52:27.743018 10 tokens_controller.go:262] error synchronizing serviceaccount projected-5510/default: secrets \"default-token-c4mlk\" is forbidden: unable to create new content in namespace projected-5510 because it is being terminated\nI0617 04:52:28.840333 10 pv_controller.go:887] volume \"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\" entered phase \"Bound\"\nI0617 04:52:28.840732 10 pv_controller.go:990] volume \"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\" bound to claim \"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\"\nI0617 04:52:28.847758 10 pv_controller.go:831] claim \"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\" entered phase \"Bound\"\nI0617 04:52:28.853352 10 pv_controller.go:887] volume \"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\" entered phase \"Bound\"\nI0617 04:52:28.853382 10 pv_controller.go:990] volume \"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\" bound to claim \"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\"\nI0617 04:52:28.865760 10 pv_controller.go:831] claim \"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\" entered phase \"Bound\"\nI0617 04:52:29.750223 10 
reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a25ff8e-edf9-11ec-83e4-befb265ca60a\") from node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:52:29.750246 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a264cfc-edf9-11ec-83e4-befb265ca60a\") from node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:52:30.262440 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a264cfc-edf9-11ec-83e4-befb265ca60a\") from node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:52:30.262666 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007/inline-volume-tester-pqlz6\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\\\" \"\nI0617 04:52:30.289510 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a25ff8e-edf9-11ec-83e4-befb265ca60a\") from node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:52:30.289739 10 event.go:294] \"Event occurred\" object=\"ephemeral-1007/inline-volume-tester-pqlz6\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\\\" \"\nE0617 04:52:30.321308 10 tokens_controller.go:262] error synchronizing serviceaccount nettest-4398/default: secrets \"default-token-rll4g\" is forbidden: unable to create new content in namespace nettest-4398 because it is being terminated\nI0617 04:52:30.516569 10 pv_controller.go:938] claim \"provisioning-9369/pvc-c27kn\" bound to volume \"local-jr467\"\nI0617 04:52:30.516843 10 event.go:294] \"Event occurred\" object=\"volume-provisioning-106/pvc-hv5dl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:52:30.528923 10 pv_controller.go:887] volume \"local-jr467\" entered phase \"Bound\"\nI0617 04:52:30.528947 10 pv_controller.go:990] volume \"local-jr467\" bound to claim \"provisioning-9369/pvc-c27kn\"\nI0617 04:52:30.534880 10 pv_controller.go:831] claim \"provisioning-9369/pvc-c27kn\" entered phase \"Bound\"\nI0617 04:52:31.359594 10 event.go:294] \"Event occurred\" object=\"webhook-3815/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-6c69dbd86b to 1\"\nI0617 04:52:31.359774 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-3815/sample-webhook-deployment-6c69dbd86b\" need=1 creating=1\nI0617 04:52:31.371104 10 event.go:294] \"Event occurred\" object=\"webhook-3815/sample-webhook-deployment-6c69dbd86b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-6c69dbd86b-z92w7\"\nI0617 04:52:31.373384 10 deployment_controller.go:490] 
\"Error syncing deployment\" deployment=\"webhook-3815/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0617 04:52:31.756016 10 garbagecollector.go:210] syncing garbage collector with updated resources from discovery (attempt 1): added: [mygroup.example.com/v1, Resource=foorz59fas], removed: []\nI0617 04:52:31.773254 10 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0617 04:52:31.786937 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:31.791571 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:31.792132 10 event.go:294] \"Event occurred\" object=\"job-3451/adopt-release\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: adopt-release-k944b\"\nI0617 04:52:31.801027 10 event.go:294] \"Event occurred\" object=\"job-3451/adopt-release\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: adopt-release-vvk7b\"\nI0617 04:52:31.801432 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:31.801604 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:31.804223 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:31.807551 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:31.875803 10 shared_informer.go:247] Caches are synced for garbage collector \nI0617 04:52:31.875822 10 garbagecollector.go:251] synced garbage collector\nI0617 04:52:32.467239 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-1437/aws2h9l9\"\nI0617 04:52:32.472812 10 pv_controller.go:648] volume \"pvc-08825914-98e2-40bb-93f0-f035774cdafb\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:52:32.476121 10 pv_controller.go:887] volume \"pvc-08825914-98e2-40bb-93f0-f035774cdafb\" entered phase \"Released\"\nI0617 04:52:32.477840 10 pv_controller.go:1348] isVolumeReleased[pvc-08825914-98e2-40bb-93f0-f035774cdafb]: volume is released\nI0617 04:52:32.812922 10 namespace_controller.go:185] Namespace has been deleted projected-5510\nE0617 04:52:33.053049 10 tokens_controller.go:262] error synchronizing serviceaccount container-probe-2429/default: secrets \"default-token-l5np4\" is forbidden: unable to create new content in namespace container-probe-2429 because it is being terminated\nI0617 04:52:33.061604 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-1128/inline-volume-tester-52vzp\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\"\nI0617 04:52:33.061625 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\"\nI0617 04:52:33.061730 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-1128/inline-volume-tester-52vzp\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\"\nI0617 04:52:33.061741 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\"\nI0617 04:52:33.068933 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\"\nI0617 04:52:33.076531 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp\" 
objectUID=8ce87c9c-e937-4243-ae94-fb55c5501050 kind=\"Pod\" virtual=false\nI0617 04:52:33.078960 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\"\nI0617 04:52:33.079712 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1128, name: inline-volume-tester-52vzp-my-volume-1, uid: 058bbdb3-eda9-445c-9ab0-ed3a48505945] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1128, name: inline-volume-tester-52vzp, uid: 8ce87c9c-e937-4243-ae94-fb55c5501050] is deletingDependents\nI0617 04:52:33.079767 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\" objectUID=058bbdb3-eda9-445c-9ab0-ed3a48505945 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:33.080876 10 pv_controller.go:648] volume \"pvc-582e3c37-7490-47ea-afca-60e202f9fd66\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:52:33.085150 10 pv_controller.go:887] volume \"pvc-582e3c37-7490-47ea-afca-60e202f9fd66\" entered phase \"Released\"\nI0617 04:52:33.087726 10 pv_controller.go:1348] isVolumeReleased[pvc-582e3c37-7490-47ea-afca-60e202f9fd66]: volume is released\nI0617 04:52:33.093272 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128/inline-volume-tester-52vzp\" objectUID=8ce87c9c-e937-4243-ae94-fb55c5501050 kind=\"Pod\" virtual=false\nI0617 04:52:33.095836 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-1128, name: inline-volume-tester-52vzp, uid: 8ce87c9c-e937-4243-ae94-fb55c5501050]\nI0617 04:52:33.095998 10 pv_controller.go:648] volume \"pvc-058bbdb3-eda9-445c-9ab0-ed3a48505945\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:52:33.101765 10 pv_controller.go:887] volume \"pvc-058bbdb3-eda9-445c-9ab0-ed3a48505945\" entered phase \"Released\"\nI0617 04:52:33.105416 10 pv_controller.go:1348] isVolumeReleased[pvc-058bbdb3-eda9-445c-9ab0-ed3a48505945]: volume is released\nI0617 04:52:33.212565 10 pv_controller_base.go:533] deletion of claim \"ephemeral-1128/inline-volume-tester-52vzp-my-volume-0\" was already processed\nI0617 04:52:33.369388 10 pv_controller_base.go:533] deletion of claim \"ephemeral-1128/inline-volume-tester-52vzp-my-volume-1\" was already processed\nI0617 04:52:34.324355 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-058bbdb3-eda9-445c-9ab0-ed3a48505945\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1128^3085d330-edf9-11ec-ae9d-fe43aeff16d9\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:34.342635 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-582e3c37-7490-47ea-afca-60e202f9fd66\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1128^3080d51d-edf9-11ec-ae9d-fe43aeff16d9\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:34.342961 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-058bbdb3-eda9-445c-9ab0-ed3a48505945\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1128^3085d330-edf9-11ec-ae9d-fe43aeff16d9\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:34.345004 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-582e3c37-7490-47ea-afca-60e202f9fd66\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1128^3080d51d-edf9-11ec-ae9d-fe43aeff16d9\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" 
\nI0617 04:52:34.515070 10 garbagecollector.go:468] \"Processing object\" object=\"emptydir-wrapper-8294/pod-secrets-fb7b96f1-a8c9-40d1-b529-26dd4606b31f\" objectUID=51e68c78-c768-4e86-b165-0c175105d584 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:34.517374 10 garbagecollector.go:580] \"Deleting object\" object=\"emptydir-wrapper-8294/pod-secrets-fb7b96f1-a8c9-40d1-b529-26dd4606b31f\" objectUID=51e68c78-c768-4e86-b165-0c175105d584 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0617 04:52:34.647927 10 tokens_controller.go:262] error synchronizing serviceaccount kubectl-2487/default: secrets \"default-token-9hspl\" is forbidden: unable to create new content in namespace kubectl-2487 because it is being terminated\nI0617 04:52:34.860700 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-582e3c37-7490-47ea-afca-60e202f9fd66\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1128^3080d51d-edf9-11ec-ae9d-fe43aeff16d9\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nE0617 04:52:34.880929 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nI0617 04:52:34.896014 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-058bbdb3-eda9-445c-9ab0-ed3a48505945\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1128^3085d330-edf9-11ec-ae9d-fe43aeff16d9\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nE0617 04:52:35.042901 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nI0617 04:52:35.066539 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-08825914-98e2-40bb-93f0-f035774cdafb\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0cecfbbe5e01da5e4\") on node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nI0617 04:52:35.071553 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-08825914-98e2-40bb-93f0-f035774cdafb\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0cecfbbe5e01da5e4\") on node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nE0617 04:52:35.157349 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nE0617 04:52:35.266107 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nE0617 04:52:35.372305 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nI0617 04:52:35.507136 10 namespace_controller.go:185] Namespace has been deleted pods-8398\nE0617 04:52:35.561179 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nE0617 04:52:35.740686 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nE0617 04:52:36.047886 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nE0617 04:52:36.228059 10 tokens_controller.go:262] error synchronizing serviceaccount container-probe-7639/default: secrets \"default-token-l26hb\" is forbidden: unable 
to create new content in namespace container-probe-7639 because it is being terminated\nE0617 04:52:36.573980 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nE0617 04:52:36.885301 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:37.397180 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nI0617 04:52:37.587794 10 event.go:294] \"Event occurred\" object=\"volume-expand-3465-358/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0617 04:52:37.782867 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:37.882314 10 event.go:294] \"Event occurred\" object=\"volume-expand-3465/csi-hostpathnzdlw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-3465\\\" or manually created by system administrator\"\nI0617 04:52:37.885122 10 event.go:294] \"Event occurred\" object=\"volume-expand-3465/csi-hostpathnzdlw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-3465\\\" or manually created by system administrator\"\nE0617 04:52:37.913463 10 pv_controller.go:1459] error finding provisioning plugin for claim ephemeral-9508/inline-volume-fg8df-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:52:37.913642 10 event.go:294] \"Event occurred\" object=\"ephemeral-9508/inline-volume-fg8df-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:52:38.225395 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-9508, name: inline-volume-fg8df, uid: 3061e87d-278e-4ca7-a858-af804dd9286e] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:38.225878 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-fg8df-my-volume\" objectUID=e8d3d365-6cf3-4f7c-9c9f-ce3417551ab1 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:38.226590 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-fg8df\" objectUID=3061e87d-278e-4ca7-a858-af804dd9286e kind=\"Pod\" virtual=false\nI0617 04:52:38.229932 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-9508, name: inline-volume-fg8df-my-volume, uid: e8d3d365-6cf3-4f7c-9c9f-ce3417551ab1] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-9508, name: inline-volume-fg8df, uid: 3061e87d-278e-4ca7-a858-af804dd9286e] is deletingDependents\nI0617 04:52:38.232855 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-9508/inline-volume-fg8df-my-volume\" objectUID=e8d3d365-6cf3-4f7c-9c9f-ce3417551ab1 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nE0617 04:52:38.236162 10 pv_controller.go:1459] error finding provisioning 
plugin for claim ephemeral-9508/inline-volume-fg8df-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:52:38.236920 10 event.go:294] \"Event occurred\" object=\"ephemeral-9508/inline-volume-fg8df-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:52:38.237511 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-fg8df-my-volume\" objectUID=e8d3d365-6cf3-4f7c-9c9f-ce3417551ab1 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:38.240259 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-9508/inline-volume-fg8df-my-volume\"\nI0617 04:52:38.243888 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-fg8df\" objectUID=3061e87d-278e-4ca7-a858-af804dd9286e kind=\"Pod\" virtual=false\nI0617 04:52:38.245495 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-9508, name: inline-volume-fg8df, uid: 3061e87d-278e-4ca7-a858-af804dd9286e]\nW0617 04:52:38.281410 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:38.281431 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:38.552735 10 event.go:294] \"Event occurred\" object=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-4cjrx to be scheduled\"\nE0617 04:52:38.766098 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nI0617 04:52:38.983851 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:39.241906 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-7298/inline-volume-tester2-vldfg\" PVC=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\"\nI0617 04:52:39.242446 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\"\nI0617 04:52:39.434599 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\"\nI0617 04:52:39.438978 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester2-vldfg\" objectUID=df08e4d2-ec37-4ba8-b31a-9939aba9964a kind=\"Pod\" virtual=false\nI0617 04:52:39.441328 10 pv_controller.go:648] volume \"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:52:39.441515 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-7298, name: inline-volume-tester2-vldfg, uid: df08e4d2-ec37-4ba8-b31a-9939aba9964a]\nI0617 04:52:39.445557 10 pv_controller.go:887] volume \"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\" entered phase \"Released\"\nI0617 04:52:39.455829 10 pv_controller.go:1348] isVolumeReleased[pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631]: volume is released\nI0617 04:52:39.464486 10 pv_controller_base.go:533] deletion of claim 
\"ephemeral-7298/inline-volume-tester2-vldfg-my-volume-0\" was already processed\nI0617 04:52:39.608145 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-5972/pvc-2pt2m\"\nI0617 04:52:39.615816 10 pv_controller.go:648] volume \"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:52:39.619004 10 pv_controller.go:887] volume \"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\" entered phase \"Released\"\nI0617 04:52:39.620514 10 pv_controller.go:1348] isVolumeReleased[pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e]: volume is released\nI0617 04:52:39.666568 10 event.go:294] \"Event occurred\" object=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:52:39.677612 10 pv_controller_base.go:533] deletion of claim \"provisioning-5972/pvc-2pt2m\" was already processed\nI0617 04:52:39.682011 10 namespace_controller.go:185] Namespace has been deleted kubectl-2487\nI0617 04:52:39.929679 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-5972/pvc-9ffpw\"\nE0617 04:52:39.936212 10 pvc_protection_controller.go:204] \"Error removing protection finalizer from PVC\" err=\"Operation cannot be fulfilled on persistentvolumeclaims \\\"pvc-9ffpw\\\": the object has been modified; please apply your changes to the latest version and try again\" PVC=\"provisioning-5972/pvc-9ffpw\"\nE0617 04:52:39.936230 10 pvc_protection_controller.go:142] PVC provisioning-5972/pvc-9ffpw failed with : Operation cannot be fulfilled on persistentvolumeclaims \"pvc-9ffpw\": the object has been modified; please apply your changes to the latest version and try again\nI0617 04:52:39.938073 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-5972/pvc-9ffpw\"\nI0617 04:52:39.943149 10 pv_controller.go:648] volume \"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:52:39.952178 10 pv_controller.go:887] volume \"pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9\" entered phase \"Released\"\nI0617 04:52:39.953864 10 pv_controller.go:1348] isVolumeReleased[pvc-59d9dbc9-7bb5-4045-bda7-b7be6aff3ac9]: volume is released\nI0617 04:52:40.005857 10 pv_controller_base.go:533] deletion of claim \"provisioning-5972/pvc-9ffpw\" was already processed\nI0617 04:52:40.214810 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-7298, name: inline-volume-tester-vkcnm, uid: 772410f9-a0f3-4b3e-bbad-221994b77177] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:40.214962 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\" objectUID=c7c7b694-46a9-42cc-9ac0-45b03f47056d kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:40.215332 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester-vkcnm\" objectUID=873b79af-6363-4469-933a-7cf3d82e79c9 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:40.215358 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester-vkcnm\" objectUID=772410f9-a0f3-4b3e-bbad-221994b77177 kind=\"Pod\" virtual=false\nI0617 04:52:40.219413 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7298, name: 
inline-volume-tester-vkcnm-my-volume-0, uid: c7c7b694-46a9-42cc-9ac0-45b03f47056d] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7298, name: inline-volume-tester-vkcnm, uid: 772410f9-a0f3-4b3e-bbad-221994b77177] is deletingDependents\nI0617 04:52:40.220670 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\" objectUID=c7c7b694-46a9-42cc-9ac0-45b03f47056d kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:52:40.222924 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7298/inline-volume-tester-vkcnm\" objectUID=873b79af-6363-4469-933a-7cf3d82e79c9 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:40.225795 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-7298/inline-volume-tester-vkcnm\" PVC=\"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\"\nI0617 04:52:40.226156 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\"\nI0617 04:52:40.226495 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\" objectUID=c7c7b694-46a9-42cc-9ac0-45b03f47056d kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:40.234689 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester-vkcnm\" objectUID=772410f9-a0f3-4b3e-bbad-221994b77177 kind=\"Pod\" virtual=false\nI0617 04:52:40.238810 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7298, name: inline-volume-tester-vkcnm-my-volume-0, uid: c7c7b694-46a9-42cc-9ac0-45b03f47056d] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7298, name: inline-volume-tester-vkcnm, uid: 772410f9-a0f3-4b3e-bbad-221994b77177] is deletingDependents\nI0617 04:52:40.238857 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\" objectUID=c7c7b694-46a9-42cc-9ac0-45b03f47056d kind=\"PersistentVolumeClaim\" virtual=false\nW0617 04:52:40.285944 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:40.285966 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0617 04:52:40.402475 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:40.402497 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0617 04:52:40.662979 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:40.663000 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:41.078988 10 garbagecollector.go:468] \"Processing object\" object=\"webhook-3815/e2e-test-webhook-dw8tx\" 
objectUID=d99da68d-b7d5-4b76-8d0d-bc814819cc5f kind=\"EndpointSlice\" virtual=false\nI0617 04:52:41.085676 10 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3815/e2e-test-webhook-dw8tx\" objectUID=d99da68d-b7d5-4b76-8d0d-bc814819cc5f kind=\"EndpointSlice\" propagationPolicy=Background\nI0617 04:52:41.188313 10 garbagecollector.go:468] \"Processing object\" object=\"webhook-3815/sample-webhook-deployment-6c69dbd86b\" objectUID=63da061a-564e-49a4-a937-b19d52c2b99e kind=\"ReplicaSet\" virtual=false\nI0617 04:52:41.188330 10 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-3815/sample-webhook-deployment\"\nI0617 04:52:41.190083 10 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3815/sample-webhook-deployment-6c69dbd86b\" objectUID=63da061a-564e-49a4-a937-b19d52c2b99e kind=\"ReplicaSet\" propagationPolicy=Background\nI0617 04:52:41.192282 10 garbagecollector.go:468] \"Processing object\" object=\"webhook-3815/sample-webhook-deployment-6c69dbd86b-z92w7\" objectUID=d909c507-cb2f-4f2d-9fd1-01b3d98a9ee6 kind=\"Pod\" virtual=false\nI0617 04:52:41.193739 10 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3815/sample-webhook-deployment-6c69dbd86b-z92w7\" objectUID=d909c507-cb2f-4f2d-9fd1-01b3d98a9ee6 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:41.199821 10 garbagecollector.go:468] \"Processing object\" object=\"webhook-3815/sample-webhook-deployment-6c69dbd86b-z92w7\" objectUID=7fe08636-886b-4122-a9f8-25c374d07d85 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:41.202128 10 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3815/sample-webhook-deployment-6c69dbd86b-z92w7\" objectUID=7fe08636-886b-4122-a9f8-25c374d07d85 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0617 04:52:41.304075 10 tokens_controller.go:262] error synchronizing serviceaccount apply-4465/default: secrets \"default-token-96wsz\" is forbidden: unable to create new content in namespace apply-4465 because it is being terminated\nE0617 04:52:41.476397 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nW0617 04:52:41.713684 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:41.713754 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0617 04:52:41.715566 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:41.715586 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:41.726765 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-08825914-98e2-40bb-93f0-f035774cdafb\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0cecfbbe5e01da5e4\") on node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nI0617 04:52:41.732596 10 pv_controller.go:1348] isVolumeReleased[pvc-08825914-98e2-40bb-93f0-f035774cdafb]: volume is released\nI0617 04:52:41.782986 10 job_controller.go:498] enqueueing job 
job-3451/adopt-release\nI0617 04:52:41.896055 10 pv_controller_base.go:533] deletion of claim \"provisioning-1437/aws2h9l9\" was already processed\nI0617 04:52:41.932638 10 namespace_controller.go:185] Namespace has been deleted subpath-8139\nI0617 04:52:42.042033 10 namespace_controller.go:185] Namespace has been deleted chunking-7442\nI0617 04:52:42.133640 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7298^375961b9-edf9-11ec-a365-66fc70675f4a\") on node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nI0617 04:52:42.136317 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7298^375961b9-edf9-11ec-a365-66fc70675f4a\") on node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nI0617 04:52:42.558945 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6, uid: 943eb37c-22db-43f9-b7c7-22e4d24f544c] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:42.558990 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\" objectUID=a74733a6-6921-44c7-8af4-f06fd1723111 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:42.559619 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\" objectUID=42678aaa-c7f8-41ef-acb9-714ff20fadbe kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:42.559756 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6\" objectUID=7ec1b6a4-0a4e-4fd7-876a-b464a199a7bd kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:42.559889 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6\" objectUID=943eb37c-22db-43f9-b7c7-22e4d24f544c kind=\"Pod\" virtual=false\nI0617 04:52:42.565895 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6-my-volume-0, uid: a74733a6-6921-44c7-8af4-f06fd1723111] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6, uid: 943eb37c-22db-43f9-b7c7-22e4d24f544c] is deletingDependents\nI0617 04:52:42.565917 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6-my-volume-1, uid: 42678aaa-c7f8-41ef-acb9-714ff20fadbe] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6, uid: 943eb37c-22db-43f9-b7c7-22e4d24f544c] is deletingDependents\nI0617 04:52:42.569072 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6\" objectUID=7ec1b6a4-0a4e-4fd7-876a-b464a199a7bd kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:42.569537 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\" objectUID=a74733a6-6921-44c7-8af4-f06fd1723111 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:52:42.570573 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\" objectUID=42678aaa-c7f8-41ef-acb9-714ff20fadbe kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:52:42.575067 10 
pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-1007/inline-volume-tester-pqlz6\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\"\nI0617 04:52:42.575301 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\"\nI0617 04:52:42.577990 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-1007/inline-volume-tester-pqlz6\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\"\nI0617 04:52:42.578163 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\"\nI0617 04:52:42.577489 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\" objectUID=42678aaa-c7f8-41ef-acb9-714ff20fadbe kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:42.577516 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\" objectUID=a74733a6-6921-44c7-8af4-f06fd1723111 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:42.577568 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6\" objectUID=943eb37c-22db-43f9-b7c7-22e4d24f544c kind=\"Pod\" virtual=false\nI0617 04:52:42.580845 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6-my-volume-1, uid: 42678aaa-c7f8-41ef-acb9-714ff20fadbe] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6, uid: 943eb37c-22db-43f9-b7c7-22e4d24f544c] is deletingDependents\nI0617 04:52:42.581053 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6-my-volume-0, uid: a74733a6-6921-44c7-8af4-f06fd1723111] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6, uid: 943eb37c-22db-43f9-b7c7-22e4d24f544c] is deletingDependents\nI0617 04:52:42.581298 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\" objectUID=a74733a6-6921-44c7-8af4-f06fd1723111 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:42.581265 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\" objectUID=42678aaa-c7f8-41ef-acb9-714ff20fadbe kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:42.652892 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-edf91ebb-567f-45ac-a7d8-0afe3641a631\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7298^375961b9-edf9-11ec-a365-66fc70675f4a\") on node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nI0617 04:52:42.783353 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:43.066391 10 pv_controller.go:887] volume \"pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa\" entered phase \"Bound\"\nI0617 04:52:43.066738 10 pv_controller.go:990] volume \"pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa\" bound to claim \"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\"\nI0617 04:52:43.087118 10 pv_controller.go:831] claim \"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\" entered phase \"Bound\"\nI0617 04:52:43.107510 10 pv_controller.go:887] volume \"pvc-52358cb8-f735-4dc3-99f8-49ce51d343da\" entered phase \"Bound\"\nI0617 04:52:43.107809 10 pv_controller.go:990] volume 
\"pvc-52358cb8-f735-4dc3-99f8-49ce51d343da\" bound to claim \"volume-expand-3465/csi-hostpathnzdlw\"\nI0617 04:52:43.114796 10 pv_controller.go:831] claim \"volume-expand-3465/csi-hostpathnzdlw\" entered phase \"Bound\"\nI0617 04:52:43.257295 10 namespace_controller.go:185] Namespace has been deleted container-probe-2429\nI0617 04:52:43.745102 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00b2aeffaa7f9be55\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nE0617 04:52:44.131448 10 tokens_controller.go:262] error synchronizing serviceaccount container-probe-7764/default: secrets \"default-token-nwvvk\" is forbidden: unable to create new content in namespace container-probe-7764 because it is being terminated\nW0617 04:52:44.246515 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:44.246538 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:44.360857 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^46c444c9-edf9-11ec-a4a3-4e4408ec2313\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:44.366812 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^46c444c9-edf9-11ec-a4a3-4e4408ec2313\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:44.665603 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-52358cb8-f735-4dc3-99f8-49ce51d343da\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-3465^52a3809a-edf9-11ec-8ad9-0e2a6bde54fd\") from node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nI0617 04:52:44.712801 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:44.712855 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:44.713290 10 garbagecollector.go:468] \"Processing object\" object=\"job-3451/adopt-release\" objectUID=4d9a0990-390f-4db4-bbdd-2f1c8411ed6c kind=\"Job\" virtual=false\nI0617 04:52:44.716654 10 garbagecollector.go:507] object [batch/v1/Job, namespace: job-3451, name: adopt-release, uid: 4d9a0990-390f-4db4-bbdd-2f1c8411ed6c]'s doesn't have an owner, continue on next item\nI0617 04:52:44.721045 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:44.828525 10 namespace_controller.go:185] Namespace has been deleted emptydir-wrapper-8294\nI0617 04:52:44.918715 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-c7181e8a-5c78-4b96-9bce-4cf33962347e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5972^46c444c9-edf9-11ec-a4a3-4e4408ec2313\") on node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:45.146654 10 namespace_controller.go:185] Namespace has been deleted kubectl-1973\nI0617 04:52:45.203039 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-52358cb8-f735-4dc3-99f8-49ce51d343da\" (UniqueName: 
\"kubernetes.io/csi/csi-hostpath-volume-expand-3465^52a3809a-edf9-11ec-8ad9-0e2a6bde54fd\") from node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nI0617 04:52:45.203370 10 event.go:294] \"Event occurred\" object=\"volume-expand-3465/pod-de172a27-0d29-46eb-8971-6d31a2bc9a6b\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-52358cb8-f735-4dc3-99f8-49ce51d343da\\\" \"\nE0617 04:52:45.216206 10 tokens_controller.go:262] error synchronizing serviceaccount provisioning-5972/default: secrets \"default-token-6qq4p\" is forbidden: unable to create new content in namespace provisioning-5972 because it is being terminated\nI0617 04:52:45.391665 10 namespace_controller.go:185] Namespace has been deleted ephemeral-1128\nI0617 04:52:45.429923 10 stateful_set.go:443] StatefulSet has been deleted ephemeral-1128-8237/csi-hostpathplugin\nI0617 04:52:45.429998 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128-8237/csi-hostpathplugin-0\" objectUID=d17c5833-8f8a-488f-9478-b70305fa3ad0 kind=\"Pod\" virtual=false\nI0617 04:52:45.430223 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1128-8237/csi-hostpathplugin-779bd9f645\" objectUID=b59b2bb9-06a5-4257-8968-830b0c0fa34e kind=\"ControllerRevision\" virtual=false\nI0617 04:52:45.432387 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1128-8237/csi-hostpathplugin-779bd9f645\" objectUID=b59b2bb9-06a5-4257-8968-830b0c0fa34e kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:52:45.432399 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1128-8237/csi-hostpathplugin-0\" objectUID=d17c5833-8f8a-488f-9478-b70305fa3ad0 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:45.444120 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"gc-3539/simpletest.rc\" need=100 creating=100\nI0617 04:52:45.457662 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-czwb8\"\nI0617 04:52:45.467996 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-7vdcb\"\nI0617 04:52:45.471315 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-m4q6q\"\nI0617 04:52:45.482847 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-h89sb\"\nI0617 04:52:45.486319 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-gx4mb\"\nI0617 04:52:45.486542 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-ltpm5\"\nI0617 04:52:45.486997 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-rlg8v\"\nI0617 
04:52:45.508855 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-ql44k\"\nI0617 04:52:45.508914 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-jfsmd\"\nI0617 04:52:45.508956 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-hfzxt\"\nI0617 04:52:45.508999 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-p4qzj\"\nI0617 04:52:45.509037 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-57lgv\"\nI0617 04:52:45.522505 10 event.go:294] \"Event occurred\" object=\"volume-provisioning-106/pvc-hv5dl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:52:45.537359 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-zbwvb\"\nI0617 04:52:45.537684 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-fxqqc\"\nI0617 04:52:45.538004 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-jsm77\"\nI0617 04:52:45.560271 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:45.560670 10 controller_ref_manager.go:239] patching pod job-3451_adopt-release-k944b to remove its controllerRef to batch/v1/Job:adopt-release\nI0617 04:52:45.609266 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-z7nzp\"\nI0617 04:52:45.625696 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-zk4rg\"\nI0617 04:52:45.625781 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-87wcw\"\nI0617 04:52:45.625841 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-dkbnv\"\nI0617 04:52:45.625888 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
simpletest.rc-8z2rj\"\nI0617 04:52:45.625952 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-bmqhp\"\nI0617 04:52:45.625989 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-vt76d\"\nI0617 04:52:45.626027 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-774j9\"\nI0617 04:52:45.626069 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-wsv8r\"\nI0617 04:52:45.626107 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-tsjh2\"\nI0617 04:52:45.626142 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-4l8mm\"\nI0617 04:52:45.626177 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-4rhl5\"\nI0617 04:52:45.636848 10 garbagecollector.go:468] \"Processing object\" object=\"job-3451/adopt-release\" objectUID=4d9a0990-390f-4db4-bbdd-2f1c8411ed6c kind=\"Job\" virtual=false\nI0617 04:52:45.637099 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:45.652519 10 namespace_controller.go:185] Namespace has been deleted nettest-4398\nI0617 04:52:45.658547 10 event.go:294] \"Event occurred\" object=\"job-3451/adopt-release\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: adopt-release-czmxt\"\nI0617 04:52:45.663091 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-blf7r\"\nI0617 04:52:45.666206 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:45.679157 10 garbagecollector.go:507] object [batch/v1/Job, namespace: job-3451, name: adopt-release, uid: 4d9a0990-390f-4db4-bbdd-2f1c8411ed6c]'s doesn't have an owner, continue on next item\nI0617 04:52:45.679340 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-5vfhz\"\nI0617 04:52:45.690065 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:45.693952 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:45.710821 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nI0617 04:52:45.714943 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-54vmz\"\nI0617 04:52:45.754192 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" 
kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-c9fw9\"\nE0617 04:52:45.846546 10 tokens_controller.go:262] error synchronizing serviceaccount webhook-3815/default: secrets \"default-token-7bt4p\" is forbidden: unable to create new content in namespace webhook-3815 because it is being terminated\nI0617 04:52:45.847812 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-m6qn4\"\nI0617 04:52:45.901166 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-7qvtc\"\nI0617 04:52:45.949170 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-hf785\"\nE0617 04:52:45.972681 10 tokens_controller.go:262] error synchronizing serviceaccount webhook-3815-markers/default: secrets \"default-token-dmd8c\" is forbidden: unable to create new content in namespace webhook-3815-markers because it is being terminated\nI0617 04:52:45.984386 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00b2aeffaa7f9be55\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:45.984621 10 event.go:294] \"Event occurred\" object=\"ephemeral-9508/inline-volume-tester-4cjrx\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa\\\" \"\nI0617 04:52:46.001415 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-msjgk\"\nI0617 04:52:46.022313 10 garbagecollector.go:468] \"Processing object\" object=\"endpointslice-5606/example-empty-selector-j7b95\" objectUID=729b1083-109e-46f6-bdde-c958848a12ba kind=\"EndpointSlice\" virtual=false\nI0617 04:52:46.027024 10 garbagecollector.go:580] \"Deleting object\" object=\"endpointslice-5606/example-empty-selector-j7b95\" objectUID=729b1083-109e-46f6-bdde-c958848a12ba kind=\"EndpointSlice\" propagationPolicy=Background\nI0617 04:52:46.049026 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-tzh55\"\nI0617 04:52:46.097803 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-kghpm\"\nI0617 04:52:46.148195 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-rlbb6\"\nI0617 04:52:46.197591 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-qcj5c\"\nI0617 04:52:46.249416 10 event.go:294] \"Event occurred\" 
object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-f4dtw\"\nI0617 04:52:46.299069 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-g8rjq\"\nI0617 04:52:46.348747 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-prkbf\"\nI0617 04:52:46.393609 10 namespace_controller.go:185] Namespace has been deleted apply-4465\nI0617 04:52:46.398763 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-fl7qp\"\nI0617 04:52:46.450133 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-jk7q9\"\nI0617 04:52:46.497968 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-mxvmr\"\nI0617 04:52:46.546976 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-blzsm\"\nI0617 04:52:46.597405 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-pbkcb\"\nI0617 04:52:46.602940 10 namespace_controller.go:185] Namespace has been deleted container-probe-7639\nI0617 04:52:46.651496 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-j55bv\"\nI0617 04:52:46.698400 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-2zxpv\"\nE0617 04:52:46.721111 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nI0617 04:52:46.754580 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-4bhcc\"\nI0617 04:52:46.794824 10 request.go:665] Waited for 1.040021018s due to client-side throttling, not priority and fairness, request: POST:https://127.0.0.1/api/v1/namespaces/gc-3539/pods\nI0617 04:52:46.797809 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-dffcs\"\nI0617 04:52:46.849387 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-p7mqm\"\nI0617 04:52:46.897875 10 event.go:294] \"Event occurred\" 
object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-nj958\"\nI0617 04:52:46.912392 10 garbagecollector.go:468] \"Processing object\" object=\"container-probe-5859/busybox-cbfd1cbf-bf97-449c-b2da-f804cabe0ab6\" objectUID=8e43ee95-862d-45b9-90bb-e21bf9638c81 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:46.930959 10 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-5859/busybox-cbfd1cbf-bf97-449c-b2da-f804cabe0ab6\" objectUID=8e43ee95-862d-45b9-90bb-e21bf9638c81 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:46.947442 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-dqz5g\"\nI0617 04:52:47.002040 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-dnbc9\"\nI0617 04:52:47.049176 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-4ptw8\"\nI0617 04:52:47.097784 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-fdhf7\"\nI0617 04:52:47.148462 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-566qf\"\nI0617 04:52:47.198501 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-ddddp\"\nI0617 04:52:47.265178 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-wcdw7\"\nI0617 04:52:47.301943 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-wgqt5\"\nI0617 04:52:47.305979 10 pv_controller.go:887] volume \"local-pv52qmv\" entered phase \"Available\"\nI0617 04:52:47.348054 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-474qc\"\nI0617 04:52:47.398223 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-lzrr2\"\nI0617 04:52:47.410447 10 pv_controller.go:938] claim \"persistent-local-volumes-test-1947/pvc-t9nn7\" bound to volume \"local-pv52qmv\"\nI0617 04:52:47.423035 10 pv_controller.go:887] volume \"local-pv52qmv\" entered phase \"Bound\"\nI0617 04:52:47.423311 10 pv_controller.go:990] volume \"local-pv52qmv\" bound to claim \"persistent-local-volumes-test-1947/pvc-t9nn7\"\nI0617 04:52:47.452499 10 pv_controller.go:831] claim \"persistent-local-volumes-test-1947/pvc-t9nn7\" entered phase 
\"Bound\"\nE0617 04:52:47.454235 10 pv_controller.go:1459] error finding provisioning plugin for claim ephemeral-66/inline-volume-xdj9m-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:52:47.454623 10 event.go:294] \"Event occurred\" object=\"ephemeral-66/inline-volume-xdj9m-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:52:47.500122 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-899qg\"\nI0617 04:52:47.548465 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-ztj66\"\nI0617 04:52:47.598625 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-sbd4h\"\nI0617 04:52:47.649261 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-qtcwt\"\nI0617 04:52:47.698096 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-fmzbk\"\nI0617 04:52:47.744776 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-66, name: inline-volume-xdj9m, uid: de08bdb3-f0ca-4941-952d-e49ccbd62352] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:47.745040 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-66/inline-volume-xdj9m-my-volume\" objectUID=c6234c40-8701-4e79-88fa-cf04cdf74c28 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:47.745563 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-66/inline-volume-xdj9m\" objectUID=de08bdb3-f0ca-4941-952d-e49ccbd62352 kind=\"Pod\" virtual=false\nI0617 04:52:47.750931 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-n9knn\"\nI0617 04:52:47.751664 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-66, name: inline-volume-xdj9m-my-volume, uid: c6234c40-8701-4e79-88fa-cf04cdf74c28] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-66, name: inline-volume-xdj9m, uid: de08bdb3-f0ca-4941-952d-e49ccbd62352] is deletingDependents\nI0617 04:52:47.755555 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-66/inline-volume-xdj9m-my-volume\" objectUID=c6234c40-8701-4e79-88fa-cf04cdf74c28 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nE0617 04:52:47.760484 10 pv_controller.go:1459] error finding provisioning plugin for claim ephemeral-66/inline-volume-xdj9m-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:52:47.760955 10 event.go:294] \"Event occurred\" object=\"ephemeral-66/inline-volume-xdj9m-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" 
message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:52:47.761033 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-66/inline-volume-xdj9m-my-volume\" objectUID=c6234c40-8701-4e79-88fa-cf04cdf74c28 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:52:47.762953 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-66/inline-volume-xdj9m-my-volume\"\nI0617 04:52:47.766778 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-66/inline-volume-xdj9m\" objectUID=de08bdb3-f0ca-4941-952d-e49ccbd62352 kind=\"Pod\" virtual=false\nI0617 04:52:47.768333 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-66, name: inline-volume-xdj9m, uid: de08bdb3-f0ca-4941-952d-e49ccbd62352]\nI0617 04:52:47.798036 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-8sn4w\"\nI0617 04:52:47.848385 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-4mmp9\"\nI0617 04:52:47.897874 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-skzbw\"\nI0617 04:52:47.947860 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-hxxff\"\nI0617 04:52:47.998363 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-hrwb6\"\nI0617 04:52:48.049605 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-wlxs9\"\nI0617 04:52:48.079951 10 event.go:294] \"Event occurred\" object=\"ephemeral-66/inline-volume-tester-8zqxs-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-8zqxs to be scheduled\"\nI0617 04:52:48.097536 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-5dbq5\"\nI0617 04:52:48.148659 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-75khr\"\nI0617 04:52:48.213741 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-x5mcc\"\nI0617 04:52:48.265808 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-tn9jw\"\nI0617 04:52:48.314768 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-bxmkd\"\nI0617 04:52:48.361424 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-xll6w\"\nI0617 04:52:48.406607 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-w7r9z\"\nE0617 04:52:48.423712 10 tokens_controller.go:262] error synchronizing serviceaccount provisioning-1437/default: secrets \"default-token-c5t44\" is forbidden: unable to create new content in namespace provisioning-1437 because it is being terminated\nI0617 04:52:48.445096 10 request.go:665] Waited for 1.046115666s due to client-side throttling, not priority and fairness, request: POST:https://127.0.0.1/api/v1/namespaces/gc-3539/pods\nI0617 04:52:48.450072 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-cf55j\"\nI0617 04:52:48.500079 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-rp88l\"\nI0617 04:52:48.546957 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-x54jc\"\nI0617 04:52:48.598722 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-48slh\"\nI0617 04:52:48.648964 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-z2tx5\"\nI0617 04:52:48.697485 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-bzg4l\"\nI0617 04:52:48.748293 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-z86rr\"\nI0617 04:52:48.798976 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-5wmbb\"\nI0617 04:52:48.848175 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-4zz7q\"\nI0617 04:52:48.899897 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-7smtf\"\nI0617 04:52:48.947300 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-cdpjj\"\nI0617 04:52:48.997992 10 
event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-dncwr\"\nI0617 04:52:49.047768 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-4qmjk\"\nI0617 04:52:49.098388 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-92w6l\"\nI0617 04:52:49.149824 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-nn5ct\"\nI0617 04:52:49.201259 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-s5bb4\"\nI0617 04:52:49.251326 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-8ccj2\"\nI0617 04:52:49.299668 10 event.go:294] \"Event occurred\" object=\"gc-3539/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-89ddx\"\nI0617 04:52:49.643485 10 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0617 04:52:49.675820 10 event.go:294] \"Event occurred\" object=\"ephemeral-66/inline-volume-tester-8zqxs-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:52:49.746664 10 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0617 04:52:49.863362 10 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\"\nI0617 04:52:50.335418 10 namespace_controller.go:185] Namespace has been deleted provisioning-5972\nI0617 04:52:50.397782 10 garbagecollector.go:468] \"Processing object\" object=\"provisioning-5972-2646/csi-hostpathplugin-5f87ff4858\" objectUID=60575360-4bac-4ad2-b277-6571dfa276ee kind=\"ControllerRevision\" virtual=false\nI0617 04:52:50.398113 10 stateful_set.go:443] StatefulSet has been deleted provisioning-5972-2646/csi-hostpathplugin\nI0617 04:52:50.398201 10 garbagecollector.go:468] \"Processing object\" object=\"provisioning-5972-2646/csi-hostpathplugin-0\" objectUID=384573da-faa7-428b-ba65-913c5238edbe kind=\"Pod\" virtual=false\nI0617 04:52:50.400147 10 garbagecollector.go:580] \"Deleting 
object\" object=\"provisioning-5972-2646/csi-hostpathplugin-0\" objectUID=384573da-faa7-428b-ba65-913c5238edbe kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.400147 10 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5972-2646/csi-hostpathplugin-5f87ff4858\" objectUID=60575360-4bac-4ad2-b277-6571dfa276ee kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:52:50.653397 10 graph_builder.go:587] add [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:50.653734 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-wlxs9\" objectUID=4b2a6084-f9ee-4d7d-abe8-6b8c4071417e kind=\"Pod\" virtual=false\nI0617 04:52:50.654427 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-5wmbb\" objectUID=4897a601-9d32-4170-a7ce-31aa6e291131 kind=\"Pod\" virtual=false\nI0617 04:52:50.654549 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dncwr\" objectUID=6e595c22-a9d2-4f12-af41-00acc8812156 kind=\"Pod\" virtual=false\nI0617 04:52:50.654672 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jsm77\" objectUID=f696e6b6-4d18-4e97-b16f-c38fd097b9c6 kind=\"Pod\" virtual=false\nI0617 04:52:50.654772 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-z7nzp\" objectUID=ee38876d-bbf6-403b-ba2c-d11e86f11a43 kind=\"Pod\" virtual=false\nI0617 04:52:50.654862 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-blf7r\" objectUID=cff3cb81-5972-4ca0-933b-53cd66e1eccb kind=\"Pod\" virtual=false\nI0617 04:52:50.654942 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hf785\" objectUID=b4796dd0-1545-4fb4-bd44-c592ce29c6b7 kind=\"Pod\" virtual=false\nI0617 04:52:50.655037 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-sbd4h\" objectUID=2b6e7e88-1175-489b-a84c-fc1fd68c95ca kind=\"Pod\" virtual=false\nI0617 04:52:50.655117 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-x5mcc\" objectUID=2685a0ed-0502-4d08-9bcb-1a146dc99ffc kind=\"Pod\" virtual=false\nI0617 04:52:50.655211 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-tn9jw\" objectUID=a3bd8f64-4de9-4dda-ade0-e20b38f7bcb1 kind=\"Pod\" virtual=false\nI0617 04:52:50.655295 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-nn5ct\" objectUID=29ba00cc-82ba-4e5a-beb0-1108d4a33b9d kind=\"Pod\" virtual=false\nI0617 04:52:50.655373 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-8z2rj\" objectUID=da6a68a1-ea71-4676-ba77-be0b23eefad2 kind=\"Pod\" virtual=false\nI0617 04:52:50.655463 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-54vmz\" objectUID=3f53418b-3cb1-4579-bceb-0fb37c989155 kind=\"Pod\" virtual=false\nI0617 04:52:50.655544 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-pbkcb\" objectUID=82696761-6578-4e93-a9ce-914c66d1f1c4 kind=\"Pod\" virtual=false\nI0617 04:52:50.655636 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-nj958\" objectUID=d7088b9f-bb42-44a4-9f6f-91cddc09ef0c kind=\"Pod\" virtual=false\nI0617 04:52:50.655716 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-zbwvb\" 
objectUID=a37d9f25-5a2f-426f-9237-17b7b08b4c3e kind=\"Pod\" virtual=false\nI0617 04:52:50.655791 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-5vfhz\" objectUID=a3380613-733c-41d4-be7a-e90dbc50a7ce kind=\"Pod\" virtual=false\nI0617 04:52:50.655883 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-qtcwt\" objectUID=e1c8e761-b75b-4a20-8a56-f9116ba1f3a3 kind=\"Pod\" virtual=false\nI0617 04:52:50.655961 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-czwb8\" objectUID=5ebbdd2c-6727-4a09-be45-956af68a0f1e kind=\"Pod\" virtual=false\nI0617 04:52:50.656049 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7vdcb\" objectUID=6a2ca7c0-9c43-456c-bfaa-d375ba6de684 kind=\"Pod\" virtual=false\nI0617 04:52:50.664773 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-wlxs9\" objectUID=4b2a6084-f9ee-4d7d-abe8-6b8c4071417e kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.689537 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-pbkcb\" objectUID=82696761-6578-4e93-a9ce-914c66d1f1c4 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.689798 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hfzxt\" objectUID=794f3671-22fc-4367-85ae-ad31b9e552ac kind=\"Pod\" virtual=false\nI0617 04:52:50.690733 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-7vdcb, uid: 6a2ca7c0-9c43-456c-bfaa-d375ba6de684] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:50.690828 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-zbwvb, uid: a37d9f25-5a2f-426f-9237-17b7b08b4c3e] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:50.690904 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-qtcwt\" objectUID=e1c8e761-b75b-4a20-8a56-f9116ba1f3a3 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.691014 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-nj958\" objectUID=d7088b9f-bb42-44a4-9f6f-91cddc09ef0c kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.691081 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-5wmbb\" objectUID=4897a601-9d32-4170-a7ce-31aa6e291131 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.691139 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-czwb8, uid: 5ebbdd2c-6727-4a09-be45-956af68a0f1e] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:50.691217 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-sbd4h\" objectUID=2b6e7e88-1175-489b-a84c-fc1fd68c95ca kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.691376 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-hf785, uid: b4796dd0-1545-4fb4-bd44-c592ce29c6b7] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:50.706333 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-z7nzp\" objectUID=ee38876d-bbf6-403b-ba2c-d11e86f11a43 kind=\"Pod\" 
propagationPolicy=Background\nI0617 04:52:50.730776 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-jsm77, uid: f696e6b6-4d18-4e97-b16f-c38fd097b9c6] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:50.757755 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-dncwr\" objectUID=6e595c22-a9d2-4f12-af41-00acc8812156 kind=\"Pod\" propagationPolicy=Background\nE0617 04:52:50.762010 10 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-1128-8237/default: secrets \"default-token-x9xg6\" is forbidden: unable to create new content in namespace ephemeral-1128-8237 because it is being terminated\nI0617 04:52:50.783619 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-5vfhz, uid: a3380613-733c-41d4-be7a-e90dbc50a7ce] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:50.805554 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-54vmz, uid: 3f53418b-3cb1-4579-bceb-0fb37c989155] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:50.833795 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-8z2rj\" objectUID=da6a68a1-ea71-4676-ba77-be0b23eefad2 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.856189 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-x5mcc\" objectUID=2685a0ed-0502-4d08-9bcb-1a146dc99ffc kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.880718 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-blf7r\" objectUID=cff3cb81-5972-4ca0-933b-53cd66e1eccb kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.905376 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-nn5ct\" objectUID=29ba00cc-82ba-4e5a-beb0-1108d4a33b9d kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.939402 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-tn9jw\" objectUID=a3bd8f64-4de9-4dda-ade0-e20b38f7bcb1 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:50.960758 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-57lgv\" objectUID=b9f35bb0-ab9a-44f7-9f23-f2c4d3cc6d44 kind=\"Pod\" virtual=false\nI0617 04:52:51.008781 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-7vdcb, uid: 6a2ca7c0-9c43-456c-bfaa-d375ba6de684] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:51.010136 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-s5bb4\" objectUID=67dae16c-eb63-47d1-afb7-96f87797373e kind=\"Pod\" virtual=false\nE0617 04:52:51.031605 10 tokens_controller.go:262] error synchronizing serviceaccount job-3451/default: secrets \"default-token-2v9hp\" is forbidden: unable to create new content in namespace job-3451 because it is being terminated\nI0617 04:52:51.041601 10 namespace_controller.go:185] Namespace has been deleted webhook-3815\nI0617 04:52:51.042124 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-8ccj2\" objectUID=57c7e3b9-c369-49b1-b0d3-885070e9a656 kind=\"Pod\" virtual=false\nI0617 04:52:51.045778 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, 
name: simpletest.rc-zbwvb, uid: a37d9f25-5a2f-426f-9237-17b7b08b4c3e] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:51.064000 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ltpm5\" objectUID=015f1b66-589c-478f-9d27-338071996cbf kind=\"Pod\" virtual=false\nI0617 04:52:51.076711 10 job_controller.go:498] enqueueing job job-3451/adopt-release\nE0617 04:52:51.077020 10 tracking_utils.go:109] \"deleting tracking annotation UID expectations\" err=\"couldn't create key for object job-3451/adopt-release: could not find key for obj \\\"job-3451/adopt-release\\\"\" job=\"job-3451/adopt-release\"\nI0617 04:52:51.087272 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-m6qn4\" objectUID=d1589ee0-6d1f-4f8c-b1be-6fd27c3b8766 kind=\"Pod\" virtual=false\nI0617 04:52:51.116260 10 namespace_controller.go:185] Namespace has been deleted webhook-3815-markers\nI0617 04:52:51.118683 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-wgqt5\" objectUID=e3d9b288-c6f8-4798-bd64-213c497b4be5 kind=\"Pod\" virtual=false\nI0617 04:52:51.134037 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-czwb8, uid: 5ebbdd2c-6727-4a09-be45-956af68a0f1e] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:51.135821 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-92w6l\" objectUID=5d7a2cd5-0ef4-4439-934b-4c72da4107ea kind=\"Pod\" virtual=false\nI0617 04:52:51.160442 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fdhf7\" objectUID=e1a28f6d-ea84-4503-8b79-9d465661d90f kind=\"Pod\" virtual=false\nI0617 04:52:51.182749 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-wcdw7\" objectUID=f68afa4d-132e-4c8d-827d-fcb14917d39e kind=\"Pod\" virtual=false\nI0617 04:52:51.183106 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-hf785, uid: b4796dd0-1545-4fb4-bd44-c592ce29c6b7] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:51.212498 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-8sn4w\" objectUID=91dd29aa-24d0-4cf5-a8e5-bc179eb7ac21 kind=\"Pod\" virtual=false\nI0617 04:52:51.231255 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4mmp9\" objectUID=3e84cdb5-3624-434b-b08a-d8a115d26e20 kind=\"Pod\" virtual=false\nI0617 04:52:51.232321 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-jsm77, uid: f696e6b6-4d18-4e97-b16f-c38fd097b9c6] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:51.260300 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dqz5g\" objectUID=de7121d3-78d0-4d19-8ad1-e92e177f59e4 kind=\"Pod\" virtual=false\nI0617 04:52:51.288182 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-5vfhz, uid: a3380613-733c-41d4-be7a-e90dbc50a7ce] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:51.288835 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hrwb6\" objectUID=37c15b7e-79ae-4199-842d-8cf473c5613e kind=\"Pod\" virtual=false\nI0617 04:52:51.306280 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-cf55j\" objectUID=b82b6171-83d9-476f-916d-a84b4032030c kind=\"Pod\" 
virtual=false\nI0617 04:52:51.306815 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-54vmz, uid: 3f53418b-3cb1-4579-bceb-0fb37c989155] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:51.335479 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-z2tx5\" objectUID=a458ccd2-cda4-4030-bc9a-6a2de5d36591 kind=\"Pod\" virtual=false\nI0617 04:52:51.362241 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4rhl5\" objectUID=a37738b7-d462-4715-9991-2c6385700bd3 kind=\"Pod\" virtual=false\nW0617 04:52:51.370067 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:51.370088 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:51.388098 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-msjgk\" objectUID=bb523d2f-87c6-4a82-a58a-5dd7c0ce87bf kind=\"Pod\" virtual=false\nI0617 04:52:51.410326 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-rlbb6\" objectUID=d8087078-d495-4979-8f9f-f56e6b1c64c0 kind=\"Pod\" virtual=false\nI0617 04:52:51.434707 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-j55bv\" objectUID=e87c48b3-82a4-4540-87e9-ea7af0d641ac kind=\"Pod\" virtual=false\nI0617 04:52:51.483426 10 namespace_controller.go:185] Namespace has been deleted apply-997\nI0617 04:52:51.484768 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-hfzxt, uid: 794f3671-22fc-4367-85ae-ad31b9e552ac] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:51.955683 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-57lgv, uid: b9f35bb0-ab9a-44f7-9f23-f2c4d3cc6d44] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:51.983503 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-hfzxt, uid: 794f3671-22fc-4367-85ae-ad31b9e552ac] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:51.984217 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-n9knn\" objectUID=9a6d1fd5-5190-46bb-b019-01a2a54cdfde kind=\"Pod\" virtual=false\nI0617 04:52:52.006818 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-s5bb4\" objectUID=67dae16c-eb63-47d1-afb7-96f87797373e kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.046204 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-8ccj2\" objectUID=57c7e3b9-c369-49b1-b0d3-885070e9a656 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.058060 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-ltpm5, uid: 015f1b66-589c-478f-9d27-338071996cbf] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:52.080551 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-m6qn4\" objectUID=d1589ee0-6d1f-4f8c-b1be-6fd27c3b8766 
kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.106392 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-wgqt5\" objectUID=e3d9b288-c6f8-4798-bd64-213c497b4be5 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.130958 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-92w6l\" objectUID=5d7a2cd5-0ef4-4439-934b-4c72da4107ea kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.155785 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-fdhf7\" objectUID=e1a28f6d-ea84-4503-8b79-9d465661d90f kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.182181 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-wcdw7\" objectUID=f68afa4d-132e-4c8d-827d-fcb14917d39e kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.206025 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-8sn4w\" objectUID=91dd29aa-24d0-4cf5-a8e5-bc179eb7ac21 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.231691 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-4mmp9\" objectUID=3e84cdb5-3624-434b-b08a-d8a115d26e20 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.255780 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-dqz5g\" objectUID=de7121d3-78d0-4d19-8ad1-e92e177f59e4 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.280399 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-hrwb6\" objectUID=37c15b7e-79ae-4199-842d-8cf473c5613e kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.306152 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-cf55j\" objectUID=b82b6171-83d9-476f-916d-a84b4032030c kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.330682 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-z2tx5\" objectUID=a458ccd2-cda4-4030-bc9a-6a2de5d36591 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:52.355541 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-4rhl5, uid: a37738b7-d462-4715-9991-2c6385700bd3] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:52.380825 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-msjgk, uid: bb523d2f-87c6-4a82-a58a-5dd7c0ce87bf] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:52.406280 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-rlbb6, uid: d8087078-d495-4979-8f9f-f56e6b1c64c0] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:52.432026 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-j55bv, uid: e87c48b3-82a4-4540-87e9-ea7af0d641ac] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:52.458659 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-57lgv, uid: b9f35bb0-ab9a-44f7-9f23-f2c4d3cc6d44] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:52.459372 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-5dbq5\" 
objectUID=0e2b814d-ca8d-4742-a3ab-11bb8e82faad kind=\"Pod\" virtual=false\nI0617 04:52:52.510123 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-z86rr\" objectUID=b49ea2c0-8398-4116-8998-4ebeacb30811 kind=\"Pod\" virtual=false\nI0617 04:52:52.537282 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4qmjk\" objectUID=9b73694b-d3c6-40e6-9008-0dd1edaaaf2a kind=\"Pod\" virtual=false\nI0617 04:52:52.556427 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-ltpm5, uid: 015f1b66-589c-478f-9d27-338071996cbf] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:52.557332 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-h89sb\" objectUID=fa012247-272e-43df-98e6-327cb1681f1a kind=\"Pod\" virtual=false\nI0617 04:52:52.587421 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7qvtc\" objectUID=ad241a3a-3059-49fc-808f-15261524080b kind=\"Pod\" virtual=false\nI0617 04:52:52.612822 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-f4dtw\" objectUID=623e854b-3398-4d21-8e73-68893abd195f kind=\"Pod\" virtual=false\nI0617 04:52:52.637394 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-blzsm\" objectUID=f7f6162d-91e3-4904-b50b-e8e906219dd5 kind=\"Pod\" virtual=false\nI0617 04:52:52.660263 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-x54jc\" objectUID=6f3b8581-bbcc-4e7c-b320-c5b5ecbaa714 kind=\"Pod\" virtual=false\nI0617 04:52:52.685406 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-wsv8r\" objectUID=a6b102de-d5b8-44d5-ac6f-564d9e833867 kind=\"Pod\" virtual=false\nI0617 04:52:52.711928 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-tsjh2\" objectUID=1c3c74e7-93c6-4854-8f71-cca4ee3d21b1 kind=\"Pod\" virtual=false\nI0617 04:52:52.736468 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-zk4rg\" objectUID=befab5aa-d1bc-4ad8-9226-07cc01ef82a7 kind=\"Pod\" virtual=false\nI0617 04:52:52.761530 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-g8rjq\" objectUID=533370a9-6d2b-4ad6-8e84-281e7eb8f4bb kind=\"Pod\" virtual=false\nI0617 04:52:52.788598 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-mxvmr\" objectUID=365017ba-fa08-4ba2-8447-330c71da75e6 kind=\"Pod\" virtual=false\nI0617 04:52:52.811729 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-lzrr2\" objectUID=bcb301a6-b52c-455e-bcbf-688bb6ac4703 kind=\"Pod\" virtual=false\nI0617 04:52:52.838177 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-xll6w\" objectUID=513cc462-3c5b-40db-9cc1-2f6c3f3304f0 kind=\"Pod\" virtual=false\nI0617 04:52:52.857329 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-4rhl5, uid: a37738b7-d462-4715-9991-2c6385700bd3] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:52.858047 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4zz7q\" objectUID=efbd4ba8-ac45-42f6-9162-b6d7542dcb94 kind=\"Pod\" virtual=false\nI0617 04:52:52.882577 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-msjgk, uid: bb523d2f-87c6-4a82-a58a-5dd7c0ce87bf] to the attemptToDelete, because it's waiting for its dependents to be 
deleted\nI0617 04:52:52.883803 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jfsmd\" objectUID=119666a8-1157-49a3-bd42-6312c6674f92 kind=\"Pod\" virtual=false\nI0617 04:52:52.906767 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-rlbb6, uid: d8087078-d495-4979-8f9f-f56e6b1c64c0] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:52.907692 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fxqqc\" objectUID=77fa0d6b-0945-4d28-9be9-c7397873311b kind=\"Pod\" virtual=false\nI0617 04:52:52.931375 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-j55bv, uid: e87c48b3-82a4-4540-87e9-ea7af0d641ac] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:52.931868 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-87wcw\" objectUID=ddb12427-8ea7-46f0-bb6c-55636e98f16d kind=\"Pod\" virtual=false\nI0617 04:52:52.980249 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-n9knn\" objectUID=9a6d1fd5-5190-46bb-b019-01a2a54cdfde kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:53.271680 10 pv_controller.go:887] volume \"pvc-dad1b290-fd76-4721-aff0-7992699a4116\" entered phase \"Bound\"\nI0617 04:52:53.271943 10 pv_controller.go:990] volume \"pvc-dad1b290-fd76-4721-aff0-7992699a4116\" bound to claim \"ephemeral-66/inline-volume-tester-8zqxs-my-volume-0\"\nI0617 04:52:53.284843 10 pv_controller.go:831] claim \"ephemeral-66/inline-volume-tester-8zqxs-my-volume-0\" entered phase \"Bound\"\nI0617 04:52:53.442657 10 namespace_controller.go:185] Namespace has been deleted provisioning-1437\nI0617 04:52:53.455402 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-5dbq5\" objectUID=0e2b814d-ca8d-4742-a3ab-11bb8e82faad kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:53.491554 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jk7q9\" objectUID=9ce6c704-acc2-459b-819d-5f6fa58f5ab2 kind=\"Pod\" virtual=false\nI0617 04:52:53.505831 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-z86rr\" objectUID=b49ea2c0-8398-4116-8998-4ebeacb30811 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:53.530694 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-4qmjk\" objectUID=9b73694b-d3c6-40e6-9008-0dd1edaaaf2a kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:53.556441 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-h89sb, uid: fa012247-272e-43df-98e6-327cb1681f1a] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.580674 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-7qvtc, uid: ad241a3a-3059-49fc-808f-15261524080b] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.605443 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-f4dtw\" objectUID=623e854b-3398-4d21-8e73-68893abd195f kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:53.630203 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-blzsm, uid: f7f6162d-91e3-4904-b50b-e8e906219dd5] has FinalizerDeletingDependents, and the object itself 
has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.657104 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-x54jc\" objectUID=6f3b8581-bbcc-4e7c-b320-c5b5ecbaa714 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:53.682703 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-wsv8r, uid: a6b102de-d5b8-44d5-ac6f-564d9e833867] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.705550 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-tsjh2, uid: 1c3c74e7-93c6-4854-8f71-cca4ee3d21b1] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.730239 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-zk4rg, uid: befab5aa-d1bc-4ad8-9226-07cc01ef82a7] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.755762 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-g8rjq, uid: 533370a9-6d2b-4ad6-8e84-281e7eb8f4bb] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.760136 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-dad1b290-fd76-4721-aff0-7992699a4116\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-048bc8835a2adcf11\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:53.780435 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-mxvmr, uid: 365017ba-fa08-4ba2-8447-330c71da75e6] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.805486 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-lzrr2\" objectUID=bcb301a6-b52c-455e-bcbf-688bb6ac4703 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:53.830366 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-xll6w\" objectUID=513cc462-3c5b-40db-9cc1-2f6c3f3304f0 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:53.855417 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-4zz7q\" objectUID=efbd4ba8-ac45-42f6-9162-b6d7542dcb94 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:53.881231 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-jfsmd, uid: 119666a8-1157-49a3-bd42-6312c6674f92] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.905757 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-fxqqc, uid: 77fa0d6b-0945-4d28-9be9-c7397873311b] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.930280 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-87wcw, uid: ddb12427-8ea7-46f0-bb6c-55636e98f16d] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:53.962874 10 garbagecollector.go:468] \"Processing object\" 
object=\"gc-3539/simpletest.rc-ztj66\" objectUID=1fba4976-b789-4bc4-aede-90de1c357ee6 kind=\"Pod\" virtual=false\nI0617 04:52:54.012161 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-bxmkd\" objectUID=9709fc7b-c479-4753-b27e-4b17c07242ee kind=\"Pod\" virtual=false\nI0617 04:52:54.040398 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-cdpjj\" objectUID=ec8f7b15-cdef-4987-9b58-0ee8df30e058 kind=\"Pod\" virtual=false\nI0617 04:52:54.057762 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-h89sb, uid: fa012247-272e-43df-98e6-327cb1681f1a] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.058807 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-rlg8v\" objectUID=ace6e31b-61d8-4980-b2c4-b6bd56bbbb87 kind=\"Pod\" virtual=false\nI0617 04:52:54.083070 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fl7qp\" objectUID=ee9a50ef-38f3-4fd6-bf66-bacfe166c582 kind=\"Pod\" virtual=false\nI0617 04:52:54.083264 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-7qvtc, uid: ad241a3a-3059-49fc-808f-15261524080b] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.113875 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-rp88l\" objectUID=c8d53d03-642d-41bd-aa59-1f442a868668 kind=\"Pod\" virtual=false\nE0617 04:52:54.128516 10 tokens_controller.go:262] error synchronizing serviceaccount podtemplate-9707/default: secrets \"default-token-cjzjj\" is forbidden: unable to create new content in namespace podtemplate-9707 because it is being terminated\nI0617 04:52:54.145390 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-blzsm, uid: f7f6162d-91e3-4904-b50b-e8e906219dd5] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.146273 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-48slh\" objectUID=7f69a032-1006-4b13-a68e-0aa9280988d4 kind=\"Pod\" virtual=false\nI0617 04:52:54.162191 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-bmqhp\" objectUID=f89f6dd9-08f5-44da-a789-2569635a88d9 kind=\"Pod\" virtual=false\nI0617 04:52:54.186584 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-wsv8r, uid: a6b102de-d5b8-44d5-ac6f-564d9e833867] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.187306 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-2zxpv\" objectUID=543eb332-ccd4-45c8-9dad-bf5ffcf50e4b kind=\"Pod\" virtual=false\nI0617 04:52:54.207090 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-tsjh2, uid: 1c3c74e7-93c6-4854-8f71-cca4ee3d21b1] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.207831 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dnbc9\" objectUID=937b292b-2519-41d6-a1c4-e8800547b5b7 kind=\"Pod\" virtual=false\nI0617 04:52:54.236064 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-zk4rg, uid: befab5aa-d1bc-4ad8-9226-07cc01ef82a7] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.238983 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-474qc\" 
objectUID=605dcc17-ed15-44af-bb2f-7f8a3bc6942e kind=\"Pod\" virtual=false\nI0617 04:52:54.257409 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-g8rjq, uid: 533370a9-6d2b-4ad6-8e84-281e7eb8f4bb] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.257682 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hxxff\" objectUID=7c3ba075-8e00-4595-8bd2-99f719b6afff kind=\"Pod\" virtual=false\nI0617 04:52:54.281547 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-w7r9z\" objectUID=0cf2d03c-c92b-48c2-a09e-62fbce34350a kind=\"Pod\" virtual=false\nI0617 04:52:54.282863 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-mxvmr, uid: 365017ba-fa08-4ba2-8447-330c71da75e6] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.312073 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-bzg4l\" objectUID=d18e671e-4846-47c4-899b-d79b08088297 kind=\"Pod\" virtual=false\nI0617 04:52:54.335307 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-89ddx\" objectUID=981b8603-0b33-4402-81c1-0c4c936dbfa1 kind=\"Pod\" virtual=false\nI0617 04:52:54.362483 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4l8mm\" objectUID=f5c5c04d-e606-4f41-8cf2-c95c31ef0d43 kind=\"Pod\" virtual=false\nI0617 04:52:54.384759 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-jfsmd, uid: 119666a8-1157-49a3-bd42-6312c6674f92] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.385342 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4ptw8\" objectUID=d0045bbf-3ac3-4d81-9550-b7e0bc7edacf kind=\"Pod\" virtual=false\nI0617 04:52:54.409667 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-566qf\" objectUID=2d9a676e-ddf2-455d-84eb-1784833dd6d0 kind=\"Pod\" virtual=false\nI0617 04:52:54.410040 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-fxqqc, uid: 77fa0d6b-0945-4d28-9be9-c7397873311b] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.433337 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-899qg\" objectUID=530d786d-f133-4c3d-ba98-76d41e65214a kind=\"Pod\" virtual=false\nI0617 04:52:54.433850 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-87wcw, uid: ddb12427-8ea7-46f0-bb6c-55636e98f16d] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:54.480339 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-jk7q9, uid: 9ce6c704-acc2-459b-819d-5f6fa58f5ab2] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:54.554033 10 namespace_controller.go:185] Namespace has been deleted netpol-3359\nI0617 04:52:54.965383 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-ztj66, uid: 1fba4976-b789-4bc4-aede-90de1c357ee6] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:54.975853 10 namespace_controller.go:185] Namespace has been deleted emptydir-6014\nI0617 04:52:55.007906 10 namespace_controller.go:185] Namespace has 
been deleted configmap-7147\nI0617 04:52:55.029929 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ddddp\" objectUID=2b8a304a-080e-4c9d-852a-58ddb2c7aaf6 kind=\"Pod\" virtual=false\nI0617 04:52:55.031305 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-bxmkd\" objectUID=9709fc7b-c479-4753-b27e-4b17c07242ee kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:55.032700 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-jk7q9, uid: 9ce6c704-acc2-459b-819d-5f6fa58f5ab2] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.036497 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-cdpjj\" objectUID=ec8f7b15-cdef-4987-9b58-0ee8df30e058 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:55.070239 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-rlg8v, uid: ace6e31b-61d8-4980-b2c4-b6bd56bbbb87] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:55.080560 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-fl7qp, uid: ee9a50ef-38f3-4fd6-bf66-bacfe166c582] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:55.105338 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-rp88l\" objectUID=c8d53d03-642d-41bd-aa59-1f442a868668 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:55.130227 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-48slh\" objectUID=7f69a032-1006-4b13-a68e-0aa9280988d4 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:55.155953 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-bmqhp, uid: f89f6dd9-08f5-44da-a789-2569635a88d9] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:55.192480 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-2zxpv, uid: 543eb332-ccd4-45c8-9dad-bf5ffcf50e4b] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:55.206373 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-dnbc9, uid: 937b292b-2519-41d6-a1c4-e8800547b5b7] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:55.231604 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-474qc, uid: 605dcc17-ed15-44af-bb2f-7f8a3bc6942e] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nE0617 04:52:55.238100 10 tokens_controller.go:262] error synchronizing serviceaccount pods-3645/default: secrets \"default-token-wc76v\" is forbidden: unable to create new content in namespace pods-3645 because it is being terminated\nI0617 04:52:55.255734 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-hxxff, uid: 7c3ba075-8e00-4595-8bd2-99f719b6afff] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:55.280273 10 
garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-w7r9z\" objectUID=0cf2d03c-c92b-48c2-a09e-62fbce34350a kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:55.305709 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-bzg4l\" objectUID=d18e671e-4846-47c4-899b-d79b08088297 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:55.331848 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-89ddx\" objectUID=981b8603-0b33-4402-81c1-0c4c936dbfa1 kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:55.355582 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-4l8mm, uid: f5c5c04d-e606-4f41-8cf2-c95c31ef0d43] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:55.384191 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-4ptw8, uid: d0045bbf-3ac3-4d81-9550-b7e0bc7edacf] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:55.406484 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-566qf, uid: 2d9a676e-ddf2-455d-84eb-1784833dd6d0] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:55.431693 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-899qg\" objectUID=530d786d-f133-4c3d-ba98-76d41e65214a kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:55.459446 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-skzbw\" objectUID=c49354f8-46bb-45f2-bb24-b2f40b3fabbb kind=\"Pod\" virtual=false\nI0617 04:52:55.464480 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-ztj66, uid: 1fba4976-b789-4bc4-aede-90de1c357ee6] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.526754 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-gx4mb\" objectUID=9262c6a8-1a29-4e8c-8d7d-986bc30ef6ff kind=\"Pod\" virtual=false\nI0617 04:52:55.555240 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ql44k\" objectUID=754fa9df-306c-4457-95b9-0d3c976f49b9 kind=\"Pod\" virtual=false\nI0617 04:52:55.569809 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-rlg8v, uid: ace6e31b-61d8-4980-b2c4-b6bd56bbbb87] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.570339 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-p4qzj\" objectUID=a0bb8ea6-da1d-4172-98f0-589c44aab8f5 kind=\"Pod\" virtual=false\nI0617 04:52:55.591931 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-p7mqm\" objectUID=f7a0c495-51f2-40fd-a0cd-22b2d8240f29 kind=\"Pod\" virtual=false\nI0617 04:52:55.592151 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-fl7qp, uid: ee9a50ef-38f3-4fd6-bf66-bacfe166c582] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.617427 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-prkbf\" objectUID=1b0dcaaf-9bc8-422e-b7f1-19af912edbf1 kind=\"Pod\" virtual=false\nI0617 04:52:55.641865 10 garbagecollector.go:468] \"Processing object\" 
object=\"gc-3539/simpletest.rc-4bhcc\" objectUID=f86450e4-432c-4445-8b17-329d98675123 kind=\"Pod\" virtual=false\nE0617 04:52:55.662383 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nI0617 04:52:55.662573 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dffcs\" objectUID=274b4b7b-519a-4017-ab26-e07b213adaa7 kind=\"Pod\" virtual=false\nI0617 04:52:55.662735 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-bmqhp, uid: f89f6dd9-08f5-44da-a789-2569635a88d9] to the attemptToDelete, because it's waiting for its dependents to be deleted\nE0617 04:52:55.663250 10 tokens_controller.go:262] error synchronizing serviceaccount provisioning-5972-2646/default: secrets \"default-token-ngmms\" is forbidden: unable to create new content in namespace provisioning-5972-2646 because it is being terminated\nE0617 04:52:55.669507 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:52:55.691135 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-2zxpv, uid: 543eb332-ccd4-45c8-9dad-bf5ffcf50e4b] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.691753 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-75khr\" objectUID=130a0b9e-9d3e-43f4-a9e0-6d487780b988 kind=\"Pod\" virtual=false\nI0617 04:52:55.709101 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-dnbc9, uid: 937b292b-2519-41d6-a1c4-e8800547b5b7] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.713143 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-m4q6q\" objectUID=d2602a39-9174-4ea3-b588-ababd6b12023 kind=\"Pod\" virtual=false\nI0617 04:52:55.733266 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-474qc, uid: 605dcc17-ed15-44af-bb2f-7f8a3bc6942e] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.735169 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-774j9\" objectUID=b9ce252d-47b9-40a4-8303-147ea7e5bff9 kind=\"Pod\" virtual=false\nI0617 04:52:55.759461 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-hxxff, uid: 7c3ba075-8e00-4595-8bd2-99f719b6afff] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.760308 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-vt76d\" objectUID=c190f1ec-a64f-44bf-a127-7dc4b677a14f kind=\"Pod\" virtual=false\nI0617 04:52:55.788152 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-c9fw9\" objectUID=012a06ae-ff27-488c-809f-b9bfc8f12b5e kind=\"Pod\" virtual=false\nI0617 04:52:55.818815 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fmzbk\" objectUID=d437840b-4a5e-45dd-b9ad-312ff636e989 kind=\"Pod\" virtual=false\nI0617 04:52:55.838771 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7smtf\" objectUID=c0fc83ef-782c-4c39-9f1b-166ce4dfbf5e kind=\"Pod\" virtual=false\nI0617 04:52:55.858262 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dkbnv\" objectUID=7878bcf7-8b7c-4dde-847f-fc8a3a1fc390 kind=\"Pod\" 
virtual=false\nI0617 04:52:55.860804 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-4l8mm, uid: f5c5c04d-e606-4f41-8cf2-c95c31ef0d43] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.885257 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-4ptw8, uid: d0045bbf-3ac3-4d81-9550-b7e0bc7edacf] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.885389 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-tzh55\" objectUID=859ff11d-a9e4-4233-b2ad-a376886e21ae kind=\"Pod\" virtual=false\nE0617 04:52:55.889277 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:52:55.908741 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-kghpm\" objectUID=fe3df046-4b1f-4510-9965-7c6e7c73906d kind=\"Pod\" virtual=false\nI0617 04:52:55.909003 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-566qf, uid: 2d9a676e-ddf2-455d-84eb-1784833dd6d0] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:55.961519 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-qcj5c\" objectUID=14191a48-7fa3-441c-a8fb-cdb02b385d58 kind=\"Pod\" virtual=false\nI0617 04:52:55.989186 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-ddddp, uid: 2b8a304a-080e-4c9d-852a-58ddb2c7aaf6] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.009544 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-dad1b290-fd76-4721-aff0-7992699a4116\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-048bc8835a2adcf11\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:52:56.010272 10 event.go:294] \"Event occurred\" object=\"ephemeral-66/inline-volume-tester-8zqxs\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-dad1b290-fd76-4721-aff0-7992699a4116\\\" \"\nE0617 04:52:56.202613 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nE0617 04:52:56.416288 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:52:56.458242 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-skzbw\" objectUID=c49354f8-46bb-45f2-bb24-b2f40b3fabbb kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:56.488695 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc\" objectUID=5a91233b-d9e4-4d0d-adb0-598047dfeb88 kind=\"ReplicationController\" virtual=false\nI0617 04:52:56.491466 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-ddddp, uid: 2b8a304a-080e-4c9d-852a-58ddb2c7aaf6] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:56.510491 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-gx4mb, uid: 9262c6a8-1a29-4e8c-8d7d-986bc30ef6ff] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be 
deleted in Foreground\nI0617 04:52:56.530827 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-ql44k, uid: 754fa9df-306c-4457-95b9-0d3c976f49b9] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.555826 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-p4qzj, uid: a0bb8ea6-da1d-4172-98f0-589c44aab8f5] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.581025 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-p7mqm, uid: f7a0c495-51f2-40fd-a0cd-22b2d8240f29] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.582461 10 namespace_controller.go:185] Namespace has been deleted endpointslice-5606\nE0617 04:52:56.591808 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:52:56.605289 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-prkbf, uid: 1b0dcaaf-9bc8-422e-b7f1-19af912edbf1] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.630298 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-4bhcc, uid: f86450e4-432c-4445-8b17-329d98675123] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.655368 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-dffcs, uid: 274b4b7b-519a-4017-ab26-e07b213adaa7] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.686291 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-75khr, uid: 130a0b9e-9d3e-43f4-a9e0-6d487780b988] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.705935 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-m4q6q, uid: d2602a39-9174-4ea3-b588-ababd6b12023] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.730409 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-774j9, uid: b9ce252d-47b9-40a4-8303-147ea7e5bff9] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.755785 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-vt76d, uid: c190f1ec-a64f-44bf-a127-7dc4b677a14f] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nE0617 04:52:56.771689 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:52:56.782436 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-c9fw9, uid: 
012a06ae-ff27-488c-809f-b9bfc8f12b5e] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.806299 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-fmzbk, uid: d437840b-4a5e-45dd-b9ad-312ff636e989] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.835432 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-7smtf\" objectUID=c0fc83ef-782c-4c39-9f1b-166ce4dfbf5e kind=\"Pod\" propagationPolicy=Background\nI0617 04:52:56.855974 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-dkbnv, uid: 7878bcf7-8b7c-4dde-847f-fc8a3a1fc390] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.880672 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-tzh55, uid: 859ff11d-a9e4-4233-b2ad-a376886e21ae] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.906276 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-kghpm, uid: fe3df046-4b1f-4510-9965-7c6e7c73906d] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.930845 10 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-3539, name: simpletest.rc-qcj5c, uid: 14191a48-7fa3-441c-a8fb-cdb02b385d58] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0617 04:52:56.962787 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7vdcb\" objectUID=2eb58ccf-4c7e-4092-a443-877bd69018c4 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:56.983223 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-ql44k, uid: 754fa9df-306c-4457-95b9-0d3c976f49b9] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983248 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-p4qzj, uid: a0bb8ea6-da1d-4172-98f0-589c44aab8f5] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983258 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-p7mqm, uid: f7a0c495-51f2-40fd-a0cd-22b2d8240f29] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983266 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-gx4mb, uid: 9262c6a8-1a29-4e8c-8d7d-986bc30ef6ff] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983274 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-774j9, uid: b9ce252d-47b9-40a4-8303-147ea7e5bff9] to attemptToDelete, because its owner [v1/ReplicationController, 
namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983282 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-vt76d, uid: c190f1ec-a64f-44bf-a127-7dc4b677a14f] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983290 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-c9fw9, uid: 012a06ae-ff27-488c-809f-b9bfc8f12b5e] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983298 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-prkbf, uid: 1b0dcaaf-9bc8-422e-b7f1-19af912edbf1] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983306 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-4bhcc, uid: f86450e4-432c-4445-8b17-329d98675123] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983315 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-dffcs, uid: 274b4b7b-519a-4017-ab26-e07b213adaa7] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983324 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-75khr, uid: 130a0b9e-9d3e-43f4-a9e0-6d487780b988] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983333 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-m4q6q, uid: d2602a39-9174-4ea3-b588-ababd6b12023] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983341 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-tzh55, uid: 859ff11d-a9e4-4233-b2ad-a376886e21ae] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983355 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-kghpm, uid: fe3df046-4b1f-4510-9965-7c6e7c73906d] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983363 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-qcj5c, uid: 14191a48-7fa3-441c-a8fb-cdb02b385d58] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983398 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-fmzbk, uid: d437840b-4a5e-45dd-b9ad-312ff636e989] to 
attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983408 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-7smtf, uid: c0fc83ef-782c-4c39-9f1b-166ce4dfbf5e] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983415 10 garbagecollector.go:595] adding [v1/Pod, namespace: gc-3539, name: simpletest.rc-dkbnv, uid: 7878bcf7-8b7c-4dde-847f-fc8a3a1fc390] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88] is deletingDependents\nI0617 04:52:56.983441 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7vdcb\" objectUID=6a2ca7c0-9c43-456c-bfaa-d375ba6de684 kind=\"Pod\" virtual=false\nI0617 04:52:57.009365 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-gx4mb, uid: 9262c6a8-1a29-4e8c-8d7d-986bc30ef6ff] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.010303 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-zbwvb\" objectUID=6362a41e-6ae4-4acf-9c6a-e874d351c910 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:57.036259 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-ql44k, uid: 754fa9df-306c-4457-95b9-0d3c976f49b9] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.036993 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-zbwvb\" objectUID=a37d9f25-5a2f-426f-9237-17b7b08b4c3e kind=\"Pod\" virtual=false\nI0617 04:52:57.058789 10 garbagecollector.go:468] \"Processing object\" object=\"job-3451/adopt-release-czmxt\" objectUID=609b76c5-5c1f-4c1c-ae27-5546a5fadfbf kind=\"Pod\" virtual=false\nI0617 04:52:57.058809 10 garbagecollector.go:468] \"Processing object\" object=\"job-3451/adopt-release-vvk7b\" objectUID=927967ab-6277-465c-9150-aa3d74d05434 kind=\"Pod\" virtual=false\nI0617 04:52:57.058821 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-czwb8\" objectUID=b2d54273-0beb-413a-b2c5-17b67d6c6b1d kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:57.059979 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-p4qzj, uid: a0bb8ea6-da1d-4172-98f0-589c44aab8f5] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.083539 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-p7mqm, uid: f7a0c495-51f2-40fd-a0cd-22b2d8240f29] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.085706 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-czwb8\" objectUID=5ebbdd2c-6727-4a09-be45-956af68a0f1e kind=\"Pod\" virtual=false\nI0617 04:52:57.109128 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-prkbf, uid: 1b0dcaaf-9bc8-422e-b7f1-19af912edbf1] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.109441 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hf785\" objectUID=9cb62797-bf52-4851-985d-8bb691aa7514 kind=\"CiliumEndpoint\" virtual=false\nE0617 04:52:57.132463 10 namespace_controller.go:162] 
deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:52:57.133758 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hf785\" objectUID=b4796dd0-1545-4fb4-bd44-c592ce29c6b7 kind=\"Pod\" virtual=false\nI0617 04:52:57.133952 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-4bhcc, uid: f86450e4-432c-4445-8b17-329d98675123] to the attemptToDelete, because it's waiting for its dependents to be deleted\nE0617 04:52:57.149015 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nI0617 04:52:57.157117 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-dffcs, uid: 274b4b7b-519a-4017-ab26-e07b213adaa7] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.157392 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jsm77\" objectUID=649e8cce-7952-471d-8326-e5ce06a71813 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:57.185257 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-75khr, uid: 130a0b9e-9d3e-43f4-a9e0-6d487780b988] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.185884 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jsm77\" objectUID=f696e6b6-4d18-4e97-b16f-c38fd097b9c6 kind=\"Pod\" virtual=false\nI0617 04:52:57.206844 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-m4q6q, uid: d2602a39-9174-4ea3-b588-ababd6b12023] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.209634 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-5vfhz\" objectUID=e44c6b82-368c-421d-b5d9-4159e9b0f155 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:57.237133 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-774j9, uid: b9ce252d-47b9-40a4-8303-147ea7e5bff9] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.239661 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-5vfhz\" objectUID=a3380613-733c-41d4-be7a-e90dbc50a7ce kind=\"Pod\" virtual=false\nI0617 04:52:57.243518 10 namespace_controller.go:185] Namespace has been deleted container-probe-5859\nI0617 04:52:57.260007 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-vt76d, uid: c190f1ec-a64f-44bf-a127-7dc4b677a14f] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.260768 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-54vmz\" objectUID=65a3a00f-8748-478c-91f4-9910ba9ef50c kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:57.282177 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-c9fw9, uid: 012a06ae-ff27-488c-809f-b9bfc8f12b5e] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.282427 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-54vmz\" objectUID=3f53418b-3cb1-4579-bceb-0fb37c989155 kind=\"Pod\" virtual=false\nI0617 04:52:57.306196 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-fmzbk, uid: d437840b-4a5e-45dd-b9ad-312ff636e989] to the attemptToDelete, because it's waiting for its dependents 
to be deleted\nI0617 04:52:57.308196 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hfzxt\" objectUID=e6a86310-51dc-407a-99a8-f2c965f86f50 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:57.338659 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hfzxt\" objectUID=794f3671-22fc-4367-85ae-ad31b9e552ac kind=\"Pod\" virtual=false\nI0617 04:52:57.359406 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-dkbnv, uid: 7878bcf7-8b7c-4dde-847f-fc8a3a1fc390] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.369502 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-57lgv\" objectUID=aa5c019e-d0a5-42fa-b7ef-843327d842c8 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:57.383136 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-tzh55, uid: 859ff11d-a9e4-4233-b2ad-a376886e21ae] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.383874 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-57lgv\" objectUID=b9f35bb0-ab9a-44f7-9f23-f2c4d3cc6d44 kind=\"Pod\" virtual=false\nI0617 04:52:57.407001 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ltpm5\" objectUID=513f73a2-2b7c-465b-a78d-ebbd18b51c28 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:57.407405 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-kghpm, uid: fe3df046-4b1f-4510-9965-7c6e7c73906d] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.432671 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ltpm5\" objectUID=015f1b66-589c-478f-9d27-338071996cbf kind=\"Pod\" virtual=false\nI0617 04:52:57.432734 10 graph_builder.go:587] add [v1/Pod, namespace: gc-3539, name: simpletest.rc-qcj5c, uid: 14191a48-7fa3-441c-a8fb-cdb02b385d58] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:52:57.480997 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-7vdcb, uid: 6a2ca7c0-9c43-456c-bfaa-d375ba6de684]\nI0617 04:52:57.530569 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-zbwvb, uid: a37d9f25-5a2f-426f-9237-17b7b08b4c3e]\nW0617 04:52:57.576980 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:57.577734 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:57.579938 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:52:57.581328 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-czwb8, uid: 5ebbdd2c-6727-4a09-be45-956af68a0f1e]\nI0617 04:52:57.639968 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-hf785, uid: b4796dd0-1545-4fb4-bd44-c592ce29c6b7]\nI0617 04:52:57.681073 10 garbagecollector.go:590] remove DeleteDependents finalizer for item 
[v1/Pod, namespace: gc-3539, name: simpletest.rc-jsm77, uid: f696e6b6-4d18-4e97-b16f-c38fd097b9c6]\nI0617 04:52:57.730400 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-5vfhz, uid: a3380613-733c-41d4-be7a-e90dbc50a7ce]\nI0617 04:52:57.780950 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-54vmz, uid: 3f53418b-3cb1-4579-bceb-0fb37c989155]\nI0617 04:52:57.831582 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-hfzxt, uid: 794f3671-22fc-4367-85ae-ad31b9e552ac]\nI0617 04:52:57.855805 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4rhl5\" objectUID=0197088b-128d-4421-be6c-90080155697d kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:57.880174 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-57lgv, uid: b9f35bb0-ab9a-44f7-9f23-f2c4d3cc6d44]\nI0617 04:52:57.931142 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-ltpm5, uid: 015f1b66-589c-478f-9d27-338071996cbf]\nI0617 04:52:57.955459 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-7vdcb\" objectUID=2eb58ccf-4c7e-4092-a443-877bd69018c4 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:58.005841 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-zbwvb\" objectUID=6362a41e-6ae4-4acf-9c6a-e874d351c910 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:58.056841 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-czwb8\" objectUID=b2d54273-0beb-413a-b2c5-17b67d6c6b1d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:58.105450 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-hf785\" objectUID=9cb62797-bf52-4851-985d-8bb691aa7514 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:58.155908 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-jsm77\" objectUID=649e8cce-7952-471d-8326-e5ce06a71813 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:58.205780 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-5vfhz\" objectUID=e44c6b82-368c-421d-b5d9-4159e9b0f155 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:58.255914 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-54vmz\" objectUID=65a3a00f-8748-478c-91f4-9910ba9ef50c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:58.307281 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-hfzxt\" objectUID=e6a86310-51dc-407a-99a8-f2c965f86f50 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0617 04:52:58.333759 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:52:58.405832 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-ltpm5\" objectUID=513f73a2-2b7c-465b-a78d-ebbd18b51c28 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:58.457698 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4rhl5\" objectUID=a37738b7-d462-4715-9991-2c6385700bd3 kind=\"Pod\" virtual=false\nI0617 04:52:58.483259 10 
garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-msjgk\" objectUID=07b2d0af-62e6-49a0-a535-d9109f2b6ed0 kind=\"CiliumEndpoint\" virtual=false\nE0617 04:52:58.506069 10 garbagecollector.go:347] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"simpletest.rc-zbwvb\", UID:\"6362a41e-6ae4-4acf-9c6a-e874d351c910\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"gc-3539\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"simpletest.rc-zbwvb\", UID:\"a37d9f25-5a2f-426f-9237-17b7b08b4c3e\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: ciliumendpoints.cilium.io \"simpletest.rc-zbwvb\" not found\nI0617 04:52:58.506250 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-msjgk\" objectUID=bb523d2f-87c6-4a82-a58a-5dd7c0ce87bf kind=\"Pod\" virtual=false\nI0617 04:52:58.535512 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-rlbb6\" objectUID=8fc88c3f-79fd-495c-818c-b8b8af3fc2bb kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:58.557781 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-rlbb6\" objectUID=d8087078-d495-4979-8f9f-f56e6b1c64c0 kind=\"Pod\" virtual=false\nI0617 04:52:58.585763 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-j55bv\" objectUID=e12fc9cd-2327-41ce-b9db-8a7bed0ea7e1 kind=\"CiliumEndpoint\" virtual=false\nE0617 04:52:58.605984 10 garbagecollector.go:347] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"simpletest.rc-hf785\", UID:\"9cb62797-bf52-4851-985d-8bb691aa7514\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"gc-3539\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"simpletest.rc-hf785\", UID:\"b4796dd0-1545-4fb4-bd44-c592ce29c6b7\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: ciliumendpoints.cilium.io \"simpletest.rc-hf785\" not found\nI0617 04:52:58.606193 10 garbagecollector.go:468] \"Processing object\" 
object=\"gc-3539/simpletest.rc-j55bv\" objectUID=e87c48b3-82a4-4540-87e9-ea7af0d641ac kind=\"Pod\" virtual=false\nI0617 04:52:58.634037 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-h89sb\" objectUID=bb1e3b51-2384-43de-a034-300285797f15 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:58.658515 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-h89sb\" objectUID=fa012247-272e-43df-98e6-327cb1681f1a kind=\"Pod\" virtual=false\nI0617 04:52:58.684682 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7qvtc\" objectUID=dbadb40d-4f9a-4996-89df-fb76f854b0f4 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:58.707681 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7qvtc\" objectUID=ad241a3a-3059-49fc-808f-15261524080b kind=\"Pod\" virtual=false\nI0617 04:52:58.734533 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-f4dtw\" objectUID=df0d3bfe-e249-431f-bcd3-b44d2da35639 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:58.758825 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-blzsm\" objectUID=5eba5e91-dd0f-4ee8-8646-a83d5561afc1 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:58.784802 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-blzsm\" objectUID=f7f6162d-91e3-4904-b50b-e8e906219dd5 kind=\"Pod\" virtual=false\nI0617 04:52:58.807497 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-wsv8r\" objectUID=99edd4a0-bb64-44ea-bb57-cb474be156b5 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:58.841669 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-wsv8r\" objectUID=a6b102de-d5b8-44d5-ac6f-564d9e833867 kind=\"Pod\" virtual=false\nI0617 04:52:58.856234 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-4rhl5\" objectUID=0197088b-128d-4421-be6c-90080155697d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:58.883850 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-tsjh2\" objectUID=498bec4a-cfdc-4e22-a223-942ab23df9f0 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:58.907713 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-tsjh2\" objectUID=1c3c74e7-93c6-4854-8f71-cca4ee3d21b1 kind=\"Pod\" virtual=false\nI0617 04:52:58.935207 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-zk4rg\" objectUID=3519e80a-6256-4fcb-a616-f8654a418fd7 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:58.956700 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-4rhl5, uid: a37738b7-d462-4715-9991-2c6385700bd3]\nI0617 04:52:58.980372 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-zk4rg\" objectUID=befab5aa-d1bc-4ad8-9226-07cc01ef82a7 kind=\"Pod\" virtual=false\nI0617 04:52:59.005910 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-msjgk, uid: bb523d2f-87c6-4a82-a58a-5dd7c0ce87bf]\nI0617 04:52:59.030392 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-g8rjq\" objectUID=8fcd5aa0-361e-4303-96f3-1648120fe6ff kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:59.055828 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-rlbb6, 
uid: d8087078-d495-4979-8f9f-f56e6b1c64c0]\nI0617 04:52:59.106820 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-j55bv, uid: e87c48b3-82a4-4540-87e9-ea7af0d641ac]\nI0617 04:52:59.156196 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-h89sb, uid: fa012247-272e-43df-98e6-327cb1681f1a]\nI0617 04:52:59.171622 10 namespace_controller.go:185] Namespace has been deleted podtemplate-9707\nI0617 04:52:59.205665 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-7qvtc, uid: ad241a3a-3059-49fc-808f-15261524080b]\nW0617 04:52:59.226988 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:52:59.227011 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:52:59.234624 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-f4dtw\" objectUID=df0d3bfe-e249-431f-bcd3-b44d2da35639 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:59.256860 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-g8rjq\" objectUID=533370a9-6d2b-4ad6-8e84-281e7eb8f4bb kind=\"Pod\" virtual=false\nI0617 04:52:59.283684 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-blzsm, uid: f7f6162d-91e3-4904-b50b-e8e906219dd5]\nI0617 04:52:59.309144 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-mxvmr\" objectUID=b03c7575-20b6-4a35-9e30-468cff90b8f4 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:59.330839 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-wsv8r, uid: a6b102de-d5b8-44d5-ac6f-564d9e833867]\nI0617 04:52:59.357920 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-mxvmr\" objectUID=365017ba-fa08-4ba2-8447-330c71da75e6 kind=\"Pod\" virtual=false\nI0617 04:52:59.406219 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-tsjh2, uid: 1c3c74e7-93c6-4854-8f71-cca4ee3d21b1]\nI0617 04:52:59.481759 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-zk4rg, uid: befab5aa-d1bc-4ad8-9226-07cc01ef82a7]\nI0617 04:52:59.583998 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-j55bv\" objectUID=e12fc9cd-2327-41ce-b9db-8a7bed0ea7e1 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:59.630984 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-h89sb\" objectUID=bb1e3b51-2384-43de-a034-300285797f15 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:59.681339 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-7qvtc\" objectUID=dbadb40d-4f9a-4996-89df-fb76f854b0f4 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0617 04:52:59.714244 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:52:59.734643 10 
garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jfsmd\" objectUID=6119c7e7-db45-4024-a3b7-55cf2463ba1f kind=\"CiliumEndpoint\" virtual=false\nI0617 04:52:59.757114 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-g8rjq, uid: 533370a9-6d2b-4ad6-8e84-281e7eb8f4bb]\nI0617 04:52:59.805649 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jfsmd\" objectUID=119666a8-1157-49a3-bd42-6312c6674f92 kind=\"Pod\" virtual=false\nI0617 04:52:59.855807 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-mxvmr, uid: 365017ba-fa08-4ba2-8447-330c71da75e6]\nI0617 04:52:59.880705 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-tsjh2\" objectUID=498bec4a-cfdc-4e22-a223-942ab23df9f0 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:59.930531 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-zk4rg\" objectUID=3519e80a-6256-4fcb-a616-f8654a418fd7 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:52:59.960166 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fxqqc\" objectUID=59084e7e-d962-4fb8-b04b-25e8dcdda7b1 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.009157 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fxqqc\" objectUID=77fa0d6b-0945-4d28-9be9-c7397873311b kind=\"Pod\" virtual=false\nI0617 04:53:00.030840 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-g8rjq\" objectUID=8fcd5aa0-361e-4303-96f3-1648120fe6ff kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:00.057780 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-87wcw\" objectUID=e12db9b0-94a2-42b9-8b93-df09f4bd4fd9 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.082187 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-87wcw\" objectUID=ddb12427-8ea7-46f0-bb6c-55636e98f16d kind=\"Pod\" virtual=false\nI0617 04:53:00.109531 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jk7q9\" objectUID=46d65d49-6afd-4525-9c92-b5f995e3c244 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.133035 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jk7q9\" objectUID=9ce6c704-acc2-459b-819d-5f6fa58f5ab2 kind=\"Pod\" virtual=false\nI0617 04:53:00.164703 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ztj66\" objectUID=698f4844-827e-4578-aed5-f00873dac236 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.184601 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ztj66\" objectUID=1fba4976-b789-4bc4-aede-90de1c357ee6 kind=\"Pod\" virtual=false\nI0617 04:53:00.208047 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-rlg8v\" objectUID=7d5c5722-a7e5-4712-9d87-eeae29f2c058 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.291778 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-rlg8v\" objectUID=ace6e31b-61d8-4980-b2c4-b6bd56bbbb87 kind=\"Pod\" virtual=false\nI0617 04:53:00.305363 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-jfsmd, uid: 119666a8-1157-49a3-bd42-6312c6674f92]\nI0617 04:53:00.339377 10 garbagecollector.go:468] \"Processing object\" 
object=\"gc-3539/simpletest.rc-fl7qp\" objectUID=4fa4647c-d318-4259-a04e-1e1d09cc2a5c kind=\"CiliumEndpoint\" virtual=false\nE0617 04:53:00.386753 10 garbagecollector.go:347] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"simpletest.rc-tsjh2\", UID:\"498bec4a-cfdc-4e22-a223-942ab23df9f0\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"gc-3539\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"simpletest.rc-tsjh2\", UID:\"1c3c74e7-93c6-4854-8f71-cca4ee3d21b1\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: ciliumendpoints.cilium.io \"simpletest.rc-tsjh2\" not found\nI0617 04:53:00.386791 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fl7qp\" objectUID=ee9a50ef-38f3-4fd6-bf66-bacfe166c582 kind=\"Pod\" virtual=false\nI0617 04:53:00.418011 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-bmqhp\" objectUID=30b34525-7cfc-45be-ae9c-1c5adc4c440e kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.445433 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-bmqhp\" objectUID=f89f6dd9-08f5-44da-a789-2569635a88d9 kind=\"Pod\" virtual=false\nI0617 04:53:00.493570 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-2zxpv\" objectUID=b47fada6-9519-467a-b69f-49b239d1110a kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.506186 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-fxqqc, uid: 77fa0d6b-0945-4d28-9be9-c7397873311b]\nI0617 04:53:00.521943 10 event.go:294] \"Event occurred\" object=\"volume-provisioning-106/pvc-hv5dl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:53:00.533140 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-2zxpv\" objectUID=543eb332-ccd4-45c8-9dad-bf5ffcf50e4b kind=\"Pod\" virtual=false\nI0617 04:53:00.580812 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-87wcw, uid: ddb12427-8ea7-46f0-bb6c-55636e98f16d]\nI0617 04:53:00.630585 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-jk7q9, uid: 9ce6c704-acc2-459b-819d-5f6fa58f5ab2]\nI0617 04:53:00.656009 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dnbc9\" objectUID=f9d8d737-aba0-4b11-8d21-bbc0ec3e5f93 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.680467 10 garbagecollector.go:590] remove DeleteDependents finalizer for item 
[v1/Pod, namespace: gc-3539, name: simpletest.rc-ztj66, uid: 1fba4976-b789-4bc4-aede-90de1c357ee6]\nI0617 04:53:00.705305 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dnbc9\" objectUID=937b292b-2519-41d6-a1c4-e8800547b5b7 kind=\"Pod\" virtual=false\nI0617 04:53:00.735009 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-jfsmd\" objectUID=6119c7e7-db45-4024-a3b7-55cf2463ba1f kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:00.770069 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-474qc\" objectUID=bcb0b662-7789-4060-bf3f-3bf587c6998b kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.780551 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-rlg8v, uid: ace6e31b-61d8-4980-b2c4-b6bd56bbbb87]\nI0617 04:53:00.830590 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-474qc\" objectUID=605dcc17-ed15-44af-bb2f-7f8a3bc6942e kind=\"Pod\" virtual=false\nI0617 04:53:00.858833 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hxxff\" objectUID=cf4b935b-7306-4224-b322-cc312d266d69 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:00.880307 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-fl7qp, uid: ee9a50ef-38f3-4fd6-bf66-bacfe166c582]\nI0617 04:53:00.907955 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hxxff\" objectUID=7c3ba075-8e00-4595-8bd2-99f719b6afff kind=\"Pod\" virtual=false\nI0617 04:53:00.931314 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-bmqhp, uid: f89f6dd9-08f5-44da-a789-2569635a88d9]\nI0617 04:53:00.955923 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-fxqqc\" objectUID=59084e7e-d962-4fb8-b04b-25e8dcdda7b1 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:00.983866 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4l8mm\" objectUID=1b0837f4-a9bb-41a4-834b-5764159c9740 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:01.031073 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-2zxpv, uid: 543eb332-ccd4-45c8-9dad-bf5ffcf50e4b]\nI0617 04:53:01.055655 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-87wcw\" objectUID=e12db9b0-94a2-42b9-8b93-df09f4bd4fd9 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:01.105681 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-jk7q9\" objectUID=46d65d49-6afd-4525-9c92-b5f995e3c244 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:01.157360 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4l8mm\" objectUID=f5c5c04d-e606-4f41-8cf2-c95c31ef0d43 kind=\"Pod\" virtual=false\nI0617 04:53:01.205336 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-dnbc9, uid: 937b292b-2519-41d6-a1c4-e8800547b5b7]\nI0617 04:53:01.239624 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4ptw8\" objectUID=21a34c42-9d6f-4532-9278-c74d6b63e48c kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:01.255429 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4ptw8\" 
objectUID=d0045bbf-3ac3-4d81-9550-b7e0bc7edacf kind=\"Pod\" virtual=false\nI0617 04:53:01.310513 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-566qf\" objectUID=23fbddcc-4dc5-4de7-99eb-8ddbb53a9cce kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:01.330110 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-474qc, uid: 605dcc17-ed15-44af-bb2f-7f8a3bc6942e]\nI0617 04:53:01.365121 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-566qf\" objectUID=2d9a676e-ddf2-455d-84eb-1784833dd6d0 kind=\"Pod\" virtual=false\nI0617 04:53:01.400645 10 namespace_controller.go:185] Namespace has been deleted pods-3645\nI0617 04:53:01.406927 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-hxxff, uid: 7c3ba075-8e00-4595-8bd2-99f719b6afff]\nI0617 04:53:01.458604 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-899qg\" objectUID=40373778-e638-4367-920d-8c8b75f97d71 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:01.480461 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ddddp\" objectUID=a4efb087-6b89-45e0-adbf-f89922b61897 kind=\"CiliumEndpoint\" virtual=false\nE0617 04:53:01.555631 10 garbagecollector.go:347] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"simpletest.rc-87wcw\", UID:\"e12db9b0-94a2-42b9-8b93-df09f4bd4fd9\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"gc-3539\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"simpletest.rc-87wcw\", UID:\"ddb12427-8ea7-46f0-bb6c-55636e98f16d\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: ciliumendpoints.cilium.io \"simpletest.rc-87wcw\" not found\nI0617 04:53:01.555682 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ddddp\" objectUID=2b8a304a-080e-4c9d-852a-58ddb2c7aaf6 kind=\"Pod\" virtual=false\nI0617 04:53:01.584279 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc\" objectUID=5a91233b-d9e4-4d0d-adb0-598047dfeb88 kind=\"ReplicationController\" virtual=false\nE0617 04:53:01.606262 10 garbagecollector.go:347] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"simpletest.rc-jk7q9\", UID:\"46d65d49-6afd-4525-9c92-b5f995e3c244\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"gc-3539\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, 
deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"simpletest.rc-jk7q9\", UID:\"9ce6c704-acc2-459b-819d-5f6fa58f5ab2\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: ciliumendpoints.cilium.io \"simpletest.rc-jk7q9\" not found\nI0617 04:53:01.606300 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-gx4mb\" objectUID=2576d596-07f4-428d-a29f-20700b2296a9 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:01.635348 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-gx4mb\" objectUID=9262c6a8-1a29-4e8c-8d7d-986bc30ef6ff kind=\"Pod\" virtual=false\nI0617 04:53:01.656176 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-4l8mm, uid: f5c5c04d-e606-4f41-8cf2-c95c31ef0d43]\nI0617 04:53:01.683595 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ql44k\" objectUID=4d0ae322-0963-489e-8cb3-c991c8d24916 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:01.731060 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ql44k\" objectUID=754fa9df-306c-4457-95b9-0d3c976f49b9 kind=\"Pod\" virtual=false\nI0617 04:53:01.755720 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-4ptw8, uid: d0045bbf-3ac3-4d81-9550-b7e0bc7edacf]\nI0617 04:53:01.784329 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-p4qzj\" objectUID=a0bb8ea6-da1d-4172-98f0-589c44aab8f5 kind=\"Pod\" virtual=false\nI0617 04:53:01.805861 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-p4qzj\" objectUID=35135495-1c43-4cde-aaa8-caa936d1f58f kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:01.855512 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-566qf, uid: 2d9a676e-ddf2-455d-84eb-1784833dd6d0]\nI0617 04:53:02.055678 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-ddddp, uid: 2b8a304a-080e-4c9d-852a-58ddb2c7aaf6]\nI0617 04:53:02.131374 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-gx4mb, uid: 9262c6a8-1a29-4e8c-8d7d-986bc30ef6ff]\nI0617 04:53:02.230783 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-ql44k, uid: 754fa9df-306c-4457-95b9-0d3c976f49b9]\nI0617 04:53:02.285607 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-p4qzj, uid: a0bb8ea6-da1d-4172-98f0-589c44aab8f5]\nE0617 04:53:02.361472 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:53:02.506383 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-ql44k\" objectUID=4d0ae322-0963-489e-8cb3-c991c8d24916 kind=\"CiliumEndpoint\" 
propagationPolicy=Background\nI0617 04:53:02.734761 10 garbagecollector.go:210] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [mygroup.example.com/v1, Resource=foorz59fas]\nI0617 04:53:02.734838 10 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0617 04:53:02.734878 10 shared_informer.go:247] Caches are synced for garbage collector \nI0617 04:53:02.734884 10 garbagecollector.go:251] synced garbage collector\nI0617 04:53:02.735322 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dffcs\" objectUID=274b4b7b-519a-4017-ab26-e07b213adaa7 kind=\"Pod\" virtual=false\nI0617 04:53:02.735485 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-p7mqm\" objectUID=8cd431fd-a76f-4b8f-a030-96906a54422c kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.735500 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-p7mqm\" objectUID=f7a0c495-51f2-40fd-a0cd-22b2d8240f29 kind=\"Pod\" virtual=false\nI0617 04:53:02.735513 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-prkbf\" objectUID=dc293cb6-a6be-43be-a68a-a51df4ca2b32 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.735523 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-prkbf\" objectUID=1b0dcaaf-9bc8-422e-b7f1-19af912edbf1 kind=\"Pod\" virtual=false\nI0617 04:53:02.735536 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4bhcc\" objectUID=f86450e4-432c-4445-8b17-329d98675123 kind=\"Pod\" virtual=false\nI0617 04:53:02.735546 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4bhcc\" objectUID=13d61974-613d-42bf-ad3b-8081f2c3b21d kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.735557 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dffcs\" objectUID=704d23fc-ca9b-409e-9008-1876d81e2114 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.735572 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-774j9\" objectUID=b9ce252d-47b9-40a4-8303-147ea7e5bff9 kind=\"Pod\" virtual=false\nI0617 04:53:02.735582 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-75khr\" objectUID=c543c182-da2a-47ea-94aa-1f6f064e64bb kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.735592 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-75khr\" objectUID=130a0b9e-9d3e-43f4-a9e0-6d487780b988 kind=\"Pod\" virtual=false\nI0617 04:53:02.735603 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-m4q6q\" objectUID=b0a496e2-ac40-4162-8e84-28ab8573d2ad kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.735614 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-m4q6q\" objectUID=d2602a39-9174-4ea3-b588-ababd6b12023 kind=\"Pod\" virtual=false\nI0617 04:53:02.735624 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-774j9\" objectUID=97d14d4b-20cf-4153-9df2-fbc88b482973 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.735638 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-c9fw9\" objectUID=c03d2075-e80d-4a58-860f-5c7328100ed0 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.735648 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-vt76d\" objectUID=48d686bd-0e9c-4caa-91b7-81a14939e8b0 kind=\"CiliumEndpoint\" 
virtual=false\nI0617 04:53:02.735659 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-vt76d\" objectUID=c190f1ec-a64f-44bf-a127-7dc4b677a14f kind=\"Pod\" virtual=false\nI0617 04:53:02.735671 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-c9fw9\" objectUID=012a06ae-ff27-488c-809f-b9bfc8f12b5e kind=\"Pod\" virtual=false\nI0617 04:53:02.735685 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fmzbk\" objectUID=c2471199-374a-4efe-a5a7-831ce46d7287 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.735700 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fmzbk\" objectUID=d437840b-4a5e-45dd-b9ad-312ff636e989 kind=\"Pod\" virtual=false\nI0617 04:53:02.755698 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-dffcs, uid: 274b4b7b-519a-4017-ab26-e07b213adaa7]\nI0617 04:53:02.805706 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-p7mqm, uid: f7a0c495-51f2-40fd-a0cd-22b2d8240f29]\nI0617 04:53:02.831264 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7smtf\" objectUID=c0fc83ef-782c-4c39-9f1b-166ce4dfbf5e kind=\"Pod\" virtual=false\nI0617 04:53:02.831286 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dkbnv\" objectUID=982e9bcf-3912-485d-89ff-f67a05adfc4b kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:02.857352 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-prkbf, uid: 1b0dcaaf-9bc8-422e-b7f1-19af912edbf1]\nI0617 04:53:02.880178 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-4bhcc, uid: f86450e4-432c-4445-8b17-329d98675123]\nI0617 04:53:02.931537 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-dkbnv\" objectUID=7878bcf7-8b7c-4dde-847f-fc8a3a1fc390 kind=\"Pod\" virtual=false\nI0617 04:53:02.958151 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-774j9, uid: b9ce252d-47b9-40a4-8303-147ea7e5bff9]\nI0617 04:53:02.980429 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-tzh55\" objectUID=859ff11d-a9e4-4233-b2ad-a376886e21ae kind=\"Pod\" virtual=false\nI0617 04:53:03.006162 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-75khr, uid: 130a0b9e-9d3e-43f4-a9e0-6d487780b988]\nI0617 04:53:03.030676 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-tzh55\" objectUID=d9b12183-abc0-49ea-b7ad-539b57823bc8 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:03.055945 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-m4q6q, uid: d2602a39-9174-4ea3-b588-ababd6b12023]\nI0617 04:53:03.080813 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-kghpm\" objectUID=fe3df046-4b1f-4510-9965-7c6e7c73906d kind=\"Pod\" virtual=false\nI0617 04:53:03.105605 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-kghpm\" objectUID=884a1b76-4317-4240-9d67-14cd717260b5 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:03.130393 10 garbagecollector.go:468] \"Processing object\" 
object=\"gc-3539/simpletest.rc-qcj5c\" objectUID=14191a48-7fa3-441c-a8fb-cdb02b385d58 kind=\"Pod\" virtual=false\nI0617 04:53:03.157417 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-vt76d, uid: c190f1ec-a64f-44bf-a127-7dc4b677a14f]\nI0617 04:53:03.180781 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-c9fw9, uid: 012a06ae-ff27-488c-809f-b9bfc8f12b5e]\nI0617 04:53:03.205104 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-qcj5c\" objectUID=81d2d552-95ae-4703-bdb0-7364acaa3968 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:03.231267 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-fmzbk, uid: d437840b-4a5e-45dd-b9ad-312ff636e989]\nI0617 04:53:03.281162 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-p7mqm\" objectUID=8cd431fd-a76f-4b8f-a030-96906a54422c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:03.330733 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7vdcb\" objectUID=6a2ca7c0-9c43-456c-bfaa-d375ba6de684 kind=\"Pod\" virtual=false\nI0617 04:53:03.406072 10 garbagecollector.go:580] \"Deleting object\" object=\"gc-3539/simpletest.rc-4bhcc\" objectUID=13d61974-613d-42bf-ad3b-8081f2c3b21d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:03.431229 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-dkbnv, uid: 7878bcf7-8b7c-4dde-847f-fc8a3a1fc390]\nI0617 04:53:03.485363 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-tzh55, uid: 859ff11d-a9e4-4233-b2ad-a376886e21ae]\nI0617 04:53:03.530853 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-zbwvb\" objectUID=6362a41e-6ae4-4acf-9c6a-e874d351c910 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:03.580285 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-kghpm, uid: fe3df046-4b1f-4510-9965-7c6e7c73906d]\nI0617 04:53:03.606120 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-zbwvb\" objectUID=a37d9f25-5a2f-426f-9237-17b7b08b4c3e kind=\"Pod\" virtual=false\nI0617 04:53:03.630794 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-3539, name: simpletest.rc-qcj5c, uid: 14191a48-7fa3-441c-a8fb-cdb02b385d58]\nI0617 04:53:03.706183 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-czwb8\" objectUID=5ebbdd2c-6727-4a09-be45-956af68a0f1e kind=\"Pod\" virtual=false\nI0617 04:53:03.763256 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hf785\" objectUID=9cb62797-bf52-4851-985d-8bb691aa7514 kind=\"CiliumEndpoint\" virtual=false\nE0617 04:53:03.780759 10 garbagecollector.go:347] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"simpletest.rc-p7mqm\", UID:\"8cd431fd-a76f-4b8f-a030-96906a54422c\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"gc-3539\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, 
dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"simpletest.rc-p7mqm\", UID:\"f7a0c495-51f2-40fd-a0cd-22b2d8240f29\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: ciliumendpoints.cilium.io \"simpletest.rc-p7mqm\" not found\nI0617 04:53:03.780823 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hf785\" objectUID=b4796dd0-1545-4fb4-bd44-c592ce29c6b7 kind=\"Pod\" virtual=false\nI0617 04:53:03.810494 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jsm77\" objectUID=f696e6b6-4d18-4e97-b16f-c38fd097b9c6 kind=\"Pod\" virtual=false\nI0617 04:53:03.837673 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-5vfhz\" objectUID=a3380613-733c-41d4-be7a-e90dbc50a7ce kind=\"Pod\" virtual=false\nI0617 04:53:03.858235 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-54vmz\" objectUID=3f53418b-3cb1-4579-bceb-0fb37c989155 kind=\"Pod\" virtual=false\nI0617 04:53:03.883788 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-hfzxt\" objectUID=794f3671-22fc-4367-85ae-ad31b9e552ac kind=\"Pod\" virtual=false\nI0617 04:53:03.908080 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ltpm5\" objectUID=015f1b66-589c-478f-9d27-338071996cbf kind=\"Pod\" virtual=false\nI0617 04:53:03.959869 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4rhl5\" objectUID=a37738b7-d462-4715-9991-2c6385700bd3 kind=\"Pod\" virtual=false\nI0617 04:53:04.009714 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-j55bv\" objectUID=e87c48b3-82a4-4540-87e9-ea7af0d641ac kind=\"Pod\" virtual=false\nI0617 04:53:04.030897 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-h89sb\" objectUID=fa012247-272e-43df-98e6-327cb1681f1a kind=\"Pod\" virtual=false\nI0617 04:53:04.061949 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-7qvtc\" objectUID=ad241a3a-3059-49fc-808f-15261524080b kind=\"Pod\" virtual=false\nI0617 04:53:04.105838 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-tsjh2\" objectUID=498bec4a-cfdc-4e22-a223-942ab23df9f0 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:04.159142 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-tsjh2\" objectUID=1c3c74e7-93c6-4854-8f71-cca4ee3d21b1 kind=\"Pod\" virtual=false\nI0617 04:53:04.188562 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-zk4rg\" objectUID=befab5aa-d1bc-4ad8-9226-07cc01ef82a7 kind=\"Pod\" virtual=false\nI0617 04:53:04.206323 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-g8rjq\" objectUID=533370a9-6d2b-4ad6-8e84-281e7eb8f4bb kind=\"Pod\" virtual=false\nI0617 04:53:04.234366 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jfsmd\" objectUID=119666a8-1157-49a3-bd42-6312c6674f92 kind=\"Pod\" 
virtual=false\nI0617 04:53:04.256120 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-87wcw\" objectUID=e12db9b0-94a2-42b9-8b93-df09f4bd4fd9 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:04.283731 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-87wcw\" objectUID=ddb12427-8ea7-46f0-bb6c-55636e98f16d kind=\"Pod\" virtual=false\nI0617 04:53:04.306082 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jk7q9\" objectUID=46d65d49-6afd-4525-9c92-b5f995e3c244 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:04.331165 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-jk7q9\" objectUID=9ce6c704-acc2-459b-819d-5f6fa58f5ab2 kind=\"Pod\" virtual=false\nI0617 04:53:04.355612 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-rlg8v\" objectUID=ace6e31b-61d8-4980-b2c4-b6bd56bbbb87 kind=\"Pod\" virtual=false\nI0617 04:53:04.380390 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc\" objectUID=5a91233b-d9e4-4d0d-adb0-598047dfeb88 kind=\"ReplicationController\" virtual=false\nI0617 04:53:04.405790 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-fxqqc\" objectUID=77fa0d6b-0945-4d28-9be9-c7397873311b kind=\"Pod\" virtual=false\nI0617 04:53:04.434613 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-ql44k\" objectUID=754fa9df-306c-4457-95b9-0d3c976f49b9 kind=\"Pod\" virtual=false\nI0617 04:53:04.456102 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-p7mqm\" objectUID=8cd431fd-a76f-4b8f-a030-96906a54422c kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:04.485282 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-p7mqm\" objectUID=f7a0c495-51f2-40fd-a0cd-22b2d8240f29 kind=\"Pod\" virtual=false\nI0617 04:53:04.505414 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc-4bhcc\" objectUID=13d61974-613d-42bf-ad3b-8081f2c3b21d kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:04.880183 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/ReplicationController, namespace: gc-3539, name: simpletest.rc, uid: 5a91233b-d9e4-4d0d-adb0-598047dfeb88]\nI0617 04:53:05.057930 10 garbagecollector.go:468] \"Processing object\" object=\"gc-3539/simpletest.rc\" objectUID=5a91233b-d9e4-4d0d-adb0-598047dfeb88 kind=\"ReplicationController\" virtual=false\nI0617 04:53:06.591257 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"provisioning-9369/pvc-c27kn\"\nI0617 04:53:06.597122 10 pv_controller.go:648] volume \"local-jr467\" is released and reclaim policy \"Retain\" will be executed\nI0617 04:53:06.599854 10 pv_controller.go:887] volume \"local-jr467\" entered phase \"Released\"\nI0617 04:53:06.700506 10 pv_controller_base.go:533] deletion of claim \"provisioning-9369/pvc-c27kn\" was already processed\nW0617 04:53:06.798168 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:06.798192 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:06.830382 10 tokens_controller.go:262] error synchronizing serviceaccount projected-2273/default: secrets 
\"default-token-dhbzl\" is forbidden: unable to create new content in namespace projected-2273 because it is being terminated\nI0617 04:53:06.838529 10 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8776/pvc-7rq55\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8776\\\" or manually created by system administrator\"\nI0617 04:53:06.842651 10 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8776/pvc-7rq55\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8776\\\" or manually created by system administrator\"\nI0617 04:53:06.852434 10 pv_controller.go:887] volume \"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\" entered phase \"Bound\"\nI0617 04:53:06.852559 10 pv_controller.go:990] volume \"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\" bound to claim \"csi-mock-volumes-8776/pvc-7rq55\"\nI0617 04:53:06.859888 10 pv_controller.go:831] claim \"csi-mock-volumes-8776/pvc-7rq55\" entered phase \"Bound\"\nI0617 04:53:07.144740 10 expand_controller.go:292] Ignoring the PVC \"volume-expand-3465/csi-hostpathnzdlw\" (uid: \"52358cb8-f735-4dc3-99f8-49ce51d343da\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI0617 04:53:07.144810 10 event.go:294] \"Event occurred\" object=\"volume-expand-3465/csi-hostpathnzdlw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nI0617 04:53:07.274762 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8776^4\") from node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nE0617 04:53:07.596214 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nI0617 04:53:07.793781 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8776^4\") from node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nI0617 04:53:07.793998 10 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8776/pvc-volume-tester-8rwb7\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\\\" \"\nE0617 04:53:08.551163 10 pv_controller.go:1459] error finding provisioning plugin for claim ephemeral-7937/inline-volume-j99b6-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:53:08.551404 10 event.go:294] \"Event occurred\" object=\"ephemeral-7937/inline-volume-j99b6-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:53:08.866511 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-7937, name: inline-volume-j99b6, uid: 6b69536b-7f10-4c4b-8f2f-8cdfa42da8b2] to the attemptToDelete, because it's 
waiting for its dependents to be deleted\nI0617 04:53:08.866982 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-j99b6-my-volume\" objectUID=dba6ae37-77d1-49dc-ac3e-bc879275f913 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:08.867281 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-j99b6\" objectUID=6b69536b-7f10-4c4b-8f2f-8cdfa42da8b2 kind=\"Pod\" virtual=false\nI0617 04:53:08.871972 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7937, name: inline-volume-j99b6-my-volume, uid: dba6ae37-77d1-49dc-ac3e-bc879275f913] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7937, name: inline-volume-j99b6, uid: 6b69536b-7f10-4c4b-8f2f-8cdfa42da8b2] is deletingDependents\nI0617 04:53:08.873582 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7937/inline-volume-j99b6-my-volume\" objectUID=dba6ae37-77d1-49dc-ac3e-bc879275f913 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:53:08.876992 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-j99b6-my-volume\" objectUID=dba6ae37-77d1-49dc-ac3e-bc879275f913 kind=\"PersistentVolumeClaim\" virtual=false\nE0617 04:53:08.879283 10 pv_controller.go:1459] error finding provisioning plugin for claim ephemeral-7937/inline-volume-j99b6-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:53:08.879857 10 event.go:294] \"Event occurred\" object=\"ephemeral-7937/inline-volume-j99b6-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:53:08.883763 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-7937/inline-volume-j99b6-my-volume\"\nI0617 04:53:08.890625 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7937/inline-volume-j99b6-my-volume\" objectUID=dba6ae37-77d1-49dc-ac3e-bc879275f913 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:53:08.894273 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-j99b6\" objectUID=6b69536b-7f10-4c4b-8f2f-8cdfa42da8b2 kind=\"Pod\" virtual=false\nE0617 04:53:08.894962 10 garbagecollector.go:347] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"v1\", Kind:\"PersistentVolumeClaim\", Name:\"inline-volume-j99b6-my-volume\", UID:\"dba6ae37-77d1-49dc-ac3e-bc879275f913\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"ephemeral-7937\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"inline-volume-j99b6\", UID:\"6b69536b-7f10-4c4b-8f2f-8cdfa42da8b2\", Controller:(*bool)(0xc0024a80b6), BlockOwnerDeletion:(*bool)(0xc0024a80b7)}}}: 
persistentvolumeclaims \"inline-volume-j99b6-my-volume\" not found\nI0617 04:53:08.896203 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-7937, name: inline-volume-j99b6, uid: 6b69536b-7f10-4c4b-8f2f-8cdfa42da8b2]\nI0617 04:53:08.901098 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-j99b6-my-volume\" objectUID=dba6ae37-77d1-49dc-ac3e-bc879275f913 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:09.209518 10 event.go:294] \"Event occurred\" object=\"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-zp5h9 to be scheduled\"\nI0617 04:53:10.445681 10 namespace_controller.go:185] Namespace has been deleted container-probe-7764\nI0617 04:53:10.688343 10 event.go:294] \"Event occurred\" object=\"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:53:10.688369 10 event.go:294] \"Event occurred\" object=\"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:53:11.509906 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-7298/inline-volume-tester-vkcnm\" PVC=\"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\"\nI0617 04:53:11.509927 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\"\nI0617 04:53:11.524480 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\"\nI0617 04:53:11.534993 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298/inline-volume-tester-vkcnm\" objectUID=772410f9-a0f3-4b3e-bbad-221994b77177 kind=\"Pod\" virtual=false\nI0617 04:53:11.535586 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-1947/pod-96e1bd14-1d88-476d-842a-5ef0f79e8862\" PVC=\"persistent-local-volumes-test-1947/pvc-t9nn7\"\nI0617 04:53:11.536810 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-1947/pvc-t9nn7\"\nI0617 04:53:11.537467 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-7298, name: inline-volume-tester-vkcnm, uid: 772410f9-a0f3-4b3e-bbad-221994b77177]\nI0617 04:53:11.537618 10 pv_controller.go:648] volume \"pvc-c7c7b694-46a9-42cc-9ac0-45b03f47056d\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:53:11.545302 10 pv_controller.go:887] volume \"pvc-c7c7b694-46a9-42cc-9ac0-45b03f47056d\" entered phase \"Released\"\nI0617 04:53:11.553156 10 pv_controller.go:1348] isVolumeReleased[pvc-c7c7b694-46a9-42cc-9ac0-45b03f47056d]: volume is released\nI0617 04:53:11.564276 10 pv_controller_base.go:533] deletion of claim \"ephemeral-7298/inline-volume-tester-vkcnm-my-volume-0\" was already processed\nE0617 04:53:11.847026 10 pv_controller.go:1459] error finding provisioning plugin for claim 
volumemode-3087/pvc-xr2dq: storageclass.storage.k8s.io \"volumemode-3087\" not found\nI0617 04:53:11.847584 10 event.go:294] \"Event occurred\" object=\"volumemode-3087/pvc-xr2dq\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-3087\\\" not found\"\nI0617 04:53:11.944363 10 namespace_controller.go:185] Namespace has been deleted projected-2273\nI0617 04:53:11.957199 10 pv_controller.go:887] volume \"local-llf6g\" entered phase \"Available\"\nE0617 04:53:12.684476 10 tokens_controller.go:262] error synchronizing serviceaccount services-7255/default: secrets \"default-token-z5fr7\" is forbidden: unable to create new content in namespace services-7255 because it is being terminated\nI0617 04:53:12.753513 10 garbagecollector.go:468] \"Processing object\" object=\"services-7255/svc-tolerate-unready-dj2r8\" objectUID=5bf1215a-68b8-4bf8-a331-801d189e58e4 kind=\"EndpointSlice\" virtual=false\nI0617 04:53:12.758016 10 garbagecollector.go:580] \"Deleting object\" object=\"services-7255/svc-tolerate-unready-dj2r8\" objectUID=5bf1215a-68b8-4bf8-a331-801d189e58e4 kind=\"EndpointSlice\" propagationPolicy=Background\nI0617 04:53:13.234670 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-c7c7b694-46a9-42cc-9ac0-45b03f47056d\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7298^2f68bcf4-edf9-11ec-a365-66fc70675f4a\") on node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nI0617 04:53:13.238514 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-c7c7b694-46a9-42cc-9ac0-45b03f47056d\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7298^2f68bcf4-edf9-11ec-a365-66fc70675f4a\") on node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nI0617 04:53:13.617381 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-1947/pod-96e1bd14-1d88-476d-842a-5ef0f79e8862\" PVC=\"persistent-local-volumes-test-1947/pvc-t9nn7\"\nI0617 04:53:13.617407 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-1947/pvc-t9nn7\"\nI0617 04:53:13.620795 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-1947/pod-96e1bd14-1d88-476d-842a-5ef0f79e8862\" PVC=\"persistent-local-volumes-test-1947/pvc-t9nn7\"\nI0617 04:53:13.620814 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-1947/pvc-t9nn7\"\nI0617 04:53:13.630068 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"persistent-local-volumes-test-1947/pvc-t9nn7\"\nI0617 04:53:13.640695 10 pv_controller.go:648] volume \"local-pv52qmv\" is released and reclaim policy \"Retain\" will be executed\nI0617 04:53:13.643935 10 pv_controller.go:887] volume \"local-pv52qmv\" entered phase \"Released\"\nI0617 04:53:13.661354 10 pv_controller_base.go:533] deletion of claim \"persistent-local-volumes-test-1947/pvc-t9nn7\" was already processed\nW0617 04:53:13.724325 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:13.724599 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:53:13.765629 10 operation_generator.go:528] 
DetachVolume.Detach succeeded for volume \"pvc-c7c7b694-46a9-42cc-9ac0-45b03f47056d\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7298^2f68bcf4-edf9-11ec-a365-66fc70675f4a\") on node \"ip-172-20-50-49.eu-west-1.compute.internal\" \nE0617 04:53:13.766715 10 tokens_controller.go:262] error synchronizing serviceaccount gc-3539/default: secrets \"default-token-pl8qc\" is forbidden: unable to create new content in namespace gc-3539 because it is being terminated\nE0617 04:53:14.011127 10 tokens_controller.go:262] error synchronizing serviceaccount metrics-grabber-6048/default: secrets \"default-token-j5gj2\" is forbidden: unable to create new content in namespace metrics-grabber-6048 because it is being terminated\nI0617 04:53:14.156899 10 pv_controller.go:887] volume \"pvc-43ced6ca-3db8-4ac1-9e63-85518483cd8d\" entered phase \"Bound\"\nI0617 04:53:14.157083 10 pv_controller.go:990] volume \"pvc-43ced6ca-3db8-4ac1-9e63-85518483cd8d\" bound to claim \"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\"\nI0617 04:53:14.166654 10 pv_controller.go:831] claim \"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\" entered phase \"Bound\"\nI0617 04:53:14.695861 10 event.go:294] \"Event occurred\" object=\"volume-9212/awsp2n2l\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0617 04:53:14.726245 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-1007/inline-volume-tester-pqlz6\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\"\nI0617 04:53:14.726265 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\"\nI0617 04:53:14.726284 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-1007/inline-volume-tester-pqlz6\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\"\nI0617 04:53:14.726291 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\"\nI0617 04:53:14.745631 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-43ced6ca-3db8-4ac1-9e63-85518483cd8d\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-058a9ecb0ed505376\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:53:14.916700 10 event.go:294] \"Event occurred\" object=\"volume-9212/awsp2n2l\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:53:14.924724 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\"\nI0617 04:53:14.929907 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6\" objectUID=943eb37c-22db-43f9-b7c7-22e4d24f544c kind=\"Pod\" virtual=false\nI0617 04:53:14.932295 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6-my-volume-1, uid: 42678aaa-c7f8-41ef-acb9-714ff20fadbe] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6, uid: 943eb37c-22db-43f9-b7c7-22e4d24f544c] is deletingDependents\nI0617 04:53:14.932448 10 garbagecollector.go:468] \"Processing object\" 
object=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\" objectUID=42678aaa-c7f8-41ef-acb9-714ff20fadbe kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:14.932681 10 pv_controller.go:648] volume \"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:53:14.933200 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\"\nI0617 04:53:14.939431 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007/inline-volume-tester-pqlz6\" objectUID=943eb37c-22db-43f9-b7c7-22e4d24f544c kind=\"Pod\" virtual=false\nI0617 04:53:14.939793 10 pv_controller.go:887] volume \"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\" entered phase \"Released\"\nI0617 04:53:14.942775 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-1007, name: inline-volume-tester-pqlz6, uid: 943eb37c-22db-43f9-b7c7-22e4d24f544c]\nI0617 04:53:14.944058 10 pv_controller.go:1348] isVolumeReleased[pvc-a74733a6-6921-44c7-8af4-f06fd1723111]: volume is released\nI0617 04:53:14.944104 10 pv_controller.go:648] volume \"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:53:14.955908 10 pv_controller.go:887] volume \"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\" entered phase \"Released\"\nI0617 04:53:14.965321 10 pv_controller.go:1348] isVolumeReleased[pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe]: volume is released\nI0617 04:53:14.970695 10 pv_controller_base.go:533] deletion of claim \"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-0\" was already processed\nI0617 04:53:14.978649 10 pv_controller_base.go:533] deletion of claim \"ephemeral-1007/inline-volume-tester-pqlz6-my-volume-1\" was already processed\nE0617 04:53:15.091222 10 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9369/default: secrets \"default-token-d6xl6\" is forbidden: unable to create new content in namespace provisioning-9369 because it is being terminated\nI0617 04:53:15.337116 10 garbagecollector.go:468] \"Processing object\" object=\"container-probe-4619/startup-46e8061b-e3e3-4803-933a-9b5d8d27d57b\" objectUID=48743ec6-4a8e-48ab-acf2-3278a27c08c2 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:15.339402 10 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-4619/startup-46e8061b-e3e3-4803-933a-9b5d8d27d57b\" objectUID=48743ec6-4a8e-48ab-acf2-3278a27c08c2 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:15.523028 10 pv_controller.go:938] claim \"volumemode-3087/pvc-xr2dq\" bound to volume \"local-llf6g\"\nI0617 04:53:15.523472 10 event.go:294] \"Event occurred\" object=\"volume-provisioning-106/pvc-hv5dl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:53:15.523665 10 event.go:294] \"Event occurred\" object=\"volume-9212/awsp2n2l\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:53:15.537081 10 pv_controller.go:887] volume \"local-llf6g\" entered phase \"Bound\"\nI0617 04:53:15.537106 10 pv_controller.go:990] volume \"local-llf6g\" bound to claim 
\"volumemode-3087/pvc-xr2dq\"\nI0617 04:53:15.550688 10 pv_controller.go:831] claim \"volumemode-3087/pvc-xr2dq\" entered phase \"Bound\"\nI0617 04:53:16.004278 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-9508, name: inline-volume-tester-4cjrx, uid: a5e85613-43e5-4804-873a-1d0412fa4d83] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:53:16.004338 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\" objectUID=54ebc89b-eab5-4199-a298-fe5bcf99f9aa kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:16.005086 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-tester-4cjrx\" objectUID=3ad22c91-a35d-4f5e-af96-27cff46657c5 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:16.005303 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-tester-4cjrx\" objectUID=a5e85613-43e5-4804-873a-1d0412fa4d83 kind=\"Pod\" virtual=false\nI0617 04:53:16.008944 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-9508, name: inline-volume-tester-4cjrx-my-volume-0, uid: 54ebc89b-eab5-4199-a298-fe5bcf99f9aa] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-9508, name: inline-volume-tester-4cjrx, uid: a5e85613-43e5-4804-873a-1d0412fa4d83] is deletingDependents\nI0617 04:53:16.010544 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\" objectUID=54ebc89b-eab5-4199-a298-fe5bcf99f9aa kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:53:16.010774 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-9508/inline-volume-tester-4cjrx\" objectUID=3ad22c91-a35d-4f5e-af96-27cff46657c5 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:16.014472 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-tester-4cjrx\" objectUID=a5e85613-43e5-4804-873a-1d0412fa4d83 kind=\"Pod\" virtual=false\nI0617 04:53:16.017484 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-9508/inline-volume-tester-4cjrx\" PVC=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\"\nI0617 04:53:16.017499 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\"\nI0617 04:53:16.018154 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\" objectUID=54ebc89b-eab5-4199-a298-fe5bcf99f9aa kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:16.020404 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-9508, name: inline-volume-tester-4cjrx-my-volume-0, uid: 54ebc89b-eab5-4199-a298-fe5bcf99f9aa] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-9508, name: inline-volume-tester-4cjrx, uid: a5e85613-43e5-4804-873a-1d0412fa4d83] is deletingDependents\nI0617 04:53:16.020470 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\" objectUID=54ebc89b-eab5-4199-a298-fe5bcf99f9aa kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:17.290005 10 namespace_controller.go:185] Namespace has been deleted ephemeral-1128-8237\nE0617 04:53:17.716663 10 namespace_controller.go:162] deletion of namespace pods-416 failed: unexpected items still remain in namespace: pods-416 for gvr: /v1, Resource=pods\nI0617 04:53:17.726998 10 
namespace_controller.go:185] Namespace has been deleted projected-498\nE0617 04:53:17.837346 10 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-1947/default: secrets \"default-token-8n6x8\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-1947 because it is being terminated\nE0617 04:53:18.167037 10 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-7298/default: secrets \"default-token-7pvvz\" is forbidden: unable to create new content in namespace ephemeral-7298 because it is being terminated\nE0617 04:53:18.183037 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nE0617 04:53:18.331126 10 tokens_controller.go:262] error synchronizing serviceaccount endpointslice-1340/default: secrets \"default-token-hr84d\" is forbidden: unable to create new content in namespace endpointslice-1340 because it is being terminated\nI0617 04:53:18.338817 10 pv_controller.go:887] volume \"pvc-3d0b7103-bb98-487f-a896-d85fda753f00\" entered phase \"Bound\"\nI0617 04:53:18.338981 10 pv_controller.go:990] volume \"pvc-3d0b7103-bb98-487f-a896-d85fda753f00\" bound to claim \"volume-9212/awsp2n2l\"\nI0617 04:53:18.347959 10 pv_controller.go:831] claim \"volume-9212/awsp2n2l\" entered phase \"Bound\"\nW0617 04:53:18.459699 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:18.459721 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:53:18.507006 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a264cfc-edf9-11ec-83e4-befb265ca60a\") on node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:53:18.542022 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a264cfc-edf9-11ec-83e4-befb265ca60a\") on node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:53:18.542565 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a25ff8e-edf9-11ec-83e4-befb265ca60a\") on node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:53:18.548231 10 event.go:294] \"Event occurred\" object=\"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester2-9rxgq to be scheduled\"\nI0617 04:53:18.551833 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a25ff8e-edf9-11ec-83e4-befb265ca60a\") on node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nE0617 04:53:18.800005 10 tokens_controller.go:262] error synchronizing serviceaccount volume-6943/default: secrets \"default-token-fzcb4\" is forbidden: unable to create new content in namespace volume-6943 because it is being 
terminated\nI0617 04:53:18.809497 10 namespace_controller.go:185] Namespace has been deleted gc-3539\nI0617 04:53:18.839376 10 namespace_controller.go:185] Namespace has been deleted provisioning-1362\nI0617 04:53:18.947025 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-3d0b7103-bb98-487f-a896-d85fda753f00\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f88927fd231c5577\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:53:19.053276 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-42678aaa-c7f8-41ef-acb9-714ff20fadbe\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a264cfc-edf9-11ec-83e4-befb265ca60a\") on node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nI0617 04:53:19.093283 10 namespace_controller.go:185] Namespace has been deleted metrics-grabber-6048\nI0617 04:53:19.093347 10 event.go:294] \"Event occurred\" object=\"ephemeral-7937/inline-volume-tester-zp5h9\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-43ced6ca-3db8-4ac1-9e63-85518483cd8d\\\" \"\nI0617 04:53:19.093362 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-43ced6ca-3db8-4ac1-9e63-85518483cd8d\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-058a9ecb0ed505376\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:53:19.108139 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-a74733a6-6921-44c7-8af4-f06fd1723111\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-1007^4a25ff8e-edf9-11ec-83e4-befb265ca60a\") on node \"ip-172-20-38-101.eu-west-1.compute.internal\" \nE0617 04:53:19.290395 10 tokens_controller.go:262] error synchronizing serviceaccount custom-resource-definition-6217/default: secrets \"default-token-kmtvp\" is forbidden: unable to create new content in namespace custom-resource-definition-6217 because it is being terminated\nI0617 04:53:19.685503 10 expand_controller.go:292] Ignoring the PVC \"csi-mock-volumes-8776/pvc-7rq55\" (uid: \"6ee0a230-6fa0-4328-881f-8d5019dea07b\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI0617 04:53:19.685819 10 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8776/pvc-7rq55\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nI0617 04:53:19.696780 10 event.go:294] \"Event occurred\" object=\"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:53:19.921388 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-6227/condition-test\" need=3 creating=3\nI0617 04:53:19.928170 10 event.go:294] \"Event occurred\" object=\"replicaset-6227/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-lln2s\"\nI0617 04:53:19.939232 10 event.go:294] \"Event occurred\" object=\"replicaset-6227/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error 
creating: pods \\\"condition-test-lm9jm\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0617 04:53:19.942368 10 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-6227/condition-test\nI0617 04:53:19.942515 10 event.go:294] \"Event occurred\" object=\"replicaset-6227/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-kgv9f\"\nE0617 04:53:19.948486 10 replica_set.go:536] sync \"replicaset-6227/condition-test\" failed with pods \"condition-test-lm9jm\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0617 04:53:19.949320 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-6227/condition-test\" need=3 creating=1\nI0617 04:53:19.951724 10 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-6227/condition-test\nI0617 04:53:19.952143 10 event.go:294] \"Event occurred\" object=\"replicaset-6227/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-f6sst\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0617 04:53:19.955635 10 replica_set.go:536] sync \"replicaset-6227/condition-test\" failed with pods \"condition-test-f6sst\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0617 04:53:19.955723 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-6227/condition-test\" need=3 creating=1\nI0617 04:53:19.960080 10 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-6227/condition-test\nI0617 04:53:19.960813 10 event.go:294] \"Event occurred\" object=\"replicaset-6227/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-crk2v\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0617 04:53:19.972698 10 replica_set.go:536] sync \"replicaset-6227/condition-test\" failed with pods \"condition-test-crk2v\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0617 04:53:19.972752 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-6227/condition-test\" need=3 creating=1\nI0617 04:53:19.974140 10 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-6227/condition-test\nE0617 04:53:19.974168 10 replica_set.go:536] sync \"replicaset-6227/condition-test\" failed with pods \"condition-test-wh5ct\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0617 04:53:19.974198 10 event.go:294] \"Event occurred\" object=\"replicaset-6227/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-wh5ct\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0617 04:53:19.992840 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-6227/condition-test\" need=3 creating=1\nI0617 04:53:19.994028 10 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-6227/condition-test\nE0617 04:53:19.994057 10 replica_set.go:536] sync \"replicaset-6227/condition-test\" failed with pods \"condition-test-m6dpm\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0617 04:53:19.994209 10 event.go:294] \"Event occurred\" object=\"replicaset-6227/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-m6dpm\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0617 04:53:20.074384 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-6227/condition-test\" need=3 creating=1\nI0617 04:53:20.076361 10 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-6227/condition-test\nE0617 04:53:20.076476 10 replica_set.go:536] sync \"replicaset-6227/condition-test\" failed with pods \"condition-test-7497v\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0617 04:53:20.076623 10 event.go:294] \"Event occurred\" object=\"replicaset-6227/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-7497v\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0617 04:53:20.146334 10 namespace_controller.go:185] Namespace has been deleted provisioning-9369\nI0617 04:53:20.237490 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-6227/condition-test\" need=3 creating=1\nI0617 04:53:20.239223 10 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-6227/condition-test\nE0617 04:53:20.239280 10 replica_set.go:536] sync \"replicaset-6227/condition-test\" failed with pods \"condition-test-t2dwp\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0617 04:53:20.239453 10 event.go:294] \"Event occurred\" object=\"replicaset-6227/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-t2dwp\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0617 04:53:20.587109 10 tokens_controller.go:262] error synchronizing serviceaccount container-probe-4619/default: secrets \"default-token-vgmkb\" is forbidden: unable to create new content in namespace container-probe-4619 because it is being terminated\nI0617 04:53:21.196358 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-3d0b7103-bb98-487f-a896-d85fda753f00\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f88927fd231c5577\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:53:21.196610 10 event.go:294] \"Event occurred\" object=\"volume-9212/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-3d0b7103-bb98-487f-a896-d85fda753f00\\\" \"\nI0617 04:53:22.373794 10 namespace_controller.go:185] Namespace has been deleted provisioning-5972-2646\nI0617 04:53:22.413837 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"csi-mock-volumes-8776/pvc-7rq55\"\nI0617 04:53:22.434702 10 pv_controller.go:648] volume \"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:53:22.442018 10 pv_controller.go:887] volume \"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\" entered phase \"Released\"\nI0617 04:53:22.444199 10 pv_controller.go:1348] isVolumeReleased[pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b]: volume is released\nI0617 04:53:23.025676 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298-2234/csi-hostpathplugin-58dcbc7c44\" objectUID=2788d46c-b3c7-404e-b879-c39bc5f98740 kind=\"ControllerRevision\" virtual=false\nI0617 04:53:23.025826 10 stateful_set.go:443] StatefulSet has been deleted ephemeral-7298-2234/csi-hostpathplugin\nI0617 04:53:23.025915 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7298-2234/csi-hostpathplugin-0\" objectUID=7eefa754-3814-4e03-b18c-daade360f524 kind=\"Pod\" virtual=false\nI0617 04:53:23.037486 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7298-2234/csi-hostpathplugin-0\" objectUID=7eefa754-3814-4e03-b18c-daade360f524 kind=\"Pod\" propagationPolicy=Background\nI0617 04:53:23.037674 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7298-2234/csi-hostpathplugin-58dcbc7c44\" objectUID=2788d46c-b3c7-404e-b879-c39bc5f98740 kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:53:23.137501 10 pv_controller.go:887] volume \"pvc-71e2c63b-3a88-47d9-b9ea-65b60c96170f\" entered phase \"Bound\"\nI0617 04:53:23.137527 10 pv_controller.go:990] volume \"pvc-71e2c63b-3a88-47d9-b9ea-65b60c96170f\" bound to claim \"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\"\nI0617 04:53:23.138060 10 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-1947\nI0617 04:53:23.143806 10 
pv_controller.go:831] claim \"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\" entered phase \"Bound\"\nI0617 04:53:23.235245 10 namespace_controller.go:185] Namespace has been deleted ephemeral-7298\nI0617 04:53:23.396100 10 namespace_controller.go:185] Namespace has been deleted endpointslice-1340\nI0617 04:53:23.701558 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume \"pvc-71e2c63b-3a88-47d9-b9ea-65b60c96170f\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-04eabd784a689e7fa\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:53:23.888389 10 namespace_controller.go:185] Namespace has been deleted volume-6943\nI0617 04:53:24.357009 10 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-6217\nI0617 04:53:24.443182 10 namespace_controller.go:185] Namespace has been deleted pods-8482\nW0617 04:53:25.161422 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:25.161444 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:53:25.587522 10 garbagecollector.go:468] \"Processing object\" object=\"replicaset-6227/condition-test-lln2s\" objectUID=39896ad4-1f9a-48f0-8ddc-4f832eb7ef15 kind=\"Pod\" virtual=false\nI0617 04:53:25.587744 10 garbagecollector.go:468] \"Processing object\" object=\"replicaset-6227/condition-test-kgv9f\" objectUID=3fea36d0-37e8-4210-b8d5-004172c29d48 kind=\"Pod\" virtual=false\nI0617 04:53:25.589500 10 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-6227/condition-test-lln2s\" objectUID=39896ad4-1f9a-48f0-8ddc-4f832eb7ef15 kind=\"Pod\" propagationPolicy=Background\nI0617 04:53:25.589809 10 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-6227/condition-test-kgv9f\" objectUID=3fea36d0-37e8-4210-b8d5-004172c29d48 kind=\"Pod\" propagationPolicy=Background\nI0617 04:53:25.629163 10 resource_quota_controller.go:311] Resource quota has been deleted replicaset-6227/condition-test\nE0617 04:53:25.657844 10 tokens_controller.go:262] error synchronizing serviceaccount replicaset-6227/default: secrets \"default-token-g42cr\" is forbidden: unable to create new content in namespace replicaset-6227 because it is being terminated\nI0617 04:53:25.662960 10 namespace_controller.go:185] Namespace has been deleted container-probe-4619\nI0617 04:53:25.903081 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume \"pvc-71e2c63b-3a88-47d9-b9ea-65b60c96170f\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-04eabd784a689e7fa\") from node \"ip-172-20-46-241.eu-west-1.compute.internal\" \nI0617 04:53:25.903297 10 event.go:294] \"Event occurred\" object=\"ephemeral-66/inline-volume-tester2-9rxgq\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-71e2c63b-3a88-47d9-b9ea-65b60c96170f\\\" \"\nI0617 04:53:26.134817 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"volumemode-3087/pvc-xr2dq\"\nI0617 04:53:26.147754 10 pv_controller.go:648] volume \"local-llf6g\" is released and reclaim policy \"Retain\" will be executed\nI0617 04:53:26.151258 10 pv_controller.go:887] volume \"local-llf6g\" entered phase \"Released\"\nI0617 04:53:26.244927 10 pv_controller_base.go:533] 
deletion of claim \"volumemode-3087/pvc-xr2dq\" was already processed\nI0617 04:53:26.837492 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume \"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8776^4\") on node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nI0617 04:53:26.841273 10 operation_generator.go:1641] Verified volume is safe to detach for volume \"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8776^4\") on node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nI0617 04:53:27.345127 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume \"pvc-6ee0a230-6fa0-4328-881f-8d5019dea07b\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8776^4\") on node \"ip-172-20-39-216.eu-west-1.compute.internal\" \nI0617 04:53:27.485276 10 namespace_controller.go:185] Namespace has been deleted ephemeral-1007\nI0617 04:53:27.492390 10 stateful_set.go:443] StatefulSet has been deleted ephemeral-1007-3379/csi-hostpathplugin\nI0617 04:53:27.492468 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007-3379/csi-hostpathplugin-d665f664f\" objectUID=9b8f3001-c5a3-48e8-993c-fc5650e90c92 kind=\"ControllerRevision\" virtual=false\nI0617 04:53:27.492532 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-1007-3379/csi-hostpathplugin-0\" objectUID=1c330ec9-0a00-4e8c-86ef-afc23e5934e4 kind=\"Pod\" virtual=false\nI0617 04:53:27.494660 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1007-3379/csi-hostpathplugin-d665f664f\" objectUID=9b8f3001-c5a3-48e8-993c-fc5650e90c92 kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:53:27.494932 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-1007-3379/csi-hostpathplugin-0\" objectUID=1c330ec9-0a00-4e8c-86ef-afc23e5934e4 kind=\"Pod\" propagationPolicy=Background\nW0617 04:53:27.663489 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:27.663564 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:28.745482 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nE0617 04:53:28.851603 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nE0617 04:53:28.996350 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nE0617 04:53:29.111965 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nE0617 04:53:29.239323 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nE0617 04:53:29.420605 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nI0617 04:53:29.474703 10 pv_controller_base.go:533] deletion of claim \"csi-mock-volumes-8776/pvc-7rq55\" was already 
processed\nI0617 04:53:29.699216 10 garbagecollector.go:468] \"Processing object\" object=\"kubectl-5564/pause\" objectUID=1306a2cf-a258-4178-907f-9ac5c36b6e03 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:29.705321 10 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-5564/pause\" objectUID=1306a2cf-a258-4178-907f-9ac5c36b6e03 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0617 04:53:29.725320 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nW0617 04:53:29.905365 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:29.905386 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:30.212533 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nI0617 04:53:30.523260 10 event.go:294] \"Event occurred\" object=\"volume-provisioning-106/pvc-hv5dl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0617 04:53:30.722725 10 namespace_controller.go:185] Namespace has been deleted replicaset-6227\nE0617 04:53:30.939850 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nI0617 04:53:31.547524 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-616/pause-pod-579f5c859c\" need=2 creating=2\nI0617 04:53:31.548127 10 event.go:294] \"Event occurred\" object=\"services-616/pause-pod\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set pause-pod-579f5c859c to 2\"\nI0617 04:53:31.553201 10 event.go:294] \"Event occurred\" object=\"services-616/pause-pod-579f5c859c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pause-pod-579f5c859c-68987\"\nI0617 04:53:31.560047 10 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"services-616/pause-pod\" err=\"Operation cannot be fulfilled on deployments.apps \\\"pause-pod\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0617 04:53:31.560668 10 event.go:294] \"Event occurred\" object=\"services-616/pause-pod-579f5c859c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pause-pod-579f5c859c-gnb9q\"\nE0617 04:53:32.311412 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nI0617 04:53:33.404473 10 namespace_controller.go:185] Namespace has been deleted ephemeral-7298-2234\nE0617 04:53:33.513894 10 pv_controller.go:1459] error finding provisioning plugin for claim provisioning-4388/pvc-9lpvd: storageclass.storage.k8s.io \"provisioning-4388\" not found\nI0617 04:53:33.514165 10 event.go:294] \"Event occurred\" object=\"provisioning-4388/pvc-9lpvd\" kind=\"PersistentVolumeClaim\" 
apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-4388\\\" not found\"\nI0617 04:53:33.623230 10 pv_controller.go:887] volume \"local-2czb9\" entered phase \"Available\"\nI0617 04:53:33.976504 10 namespace_controller.go:185] Namespace has been deleted proxy-1395\nE0617 04:53:34.954084 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nE0617 04:53:36.709082 10 namespace_controller.go:162] deletion of namespace disruption-4174 failed: unexpected items still remain in namespace: disruption-4174 for gvr: /v1, Resource=pods\nW0617 04:53:36.833053 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:36.833087 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:37.001953 10 pv_controller.go:1459] error finding provisioning plugin for claim provisioning-5267/pvc-nvscq: storageclass.storage.k8s.io \"provisioning-5267\" not found\nI0617 04:53:37.002285 10 event.go:294] \"Event occurred\" object=\"provisioning-5267/pvc-nvscq\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-5267\\\" not found\"\nI0617 04:53:37.113127 10 pv_controller.go:887] volume \"local-96d6m\" entered phase \"Available\"\nI0617 04:53:37.443627 10 namespace_controller.go:185] Namespace has been deleted volumemode-3087\nI0617 04:53:37.882740 10 namespace_controller.go:185] Namespace has been deleted ephemeral-1007-3379\nI0617 04:53:38.657591 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-7937, name: inline-volume-tester-zp5h9, uid: 4cc13b07-dd09-4b35-8cfe-55c8ab47e234] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:53:38.657655 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\" objectUID=43ced6ca-3db8-4ac1-9e63-85518483cd8d kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:38.658042 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-tester-zp5h9\" objectUID=8ea3ef95-f912-469b-9e0e-cebee1b362cc kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:38.658170 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-tester-zp5h9\" objectUID=4cc13b07-dd09-4b35-8cfe-55c8ab47e234 kind=\"Pod\" virtual=false\nI0617 04:53:38.663144 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7937, name: inline-volume-tester-zp5h9-my-volume-0, uid: 43ced6ca-3db8-4ac1-9e63-85518483cd8d] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7937, name: inline-volume-tester-zp5h9, uid: 4cc13b07-dd09-4b35-8cfe-55c8ab47e234] is deletingDependents\nI0617 04:53:38.665634 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7937/inline-volume-tester-zp5h9\" objectUID=8ea3ef95-f912-469b-9e0e-cebee1b362cc kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:38.665822 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\" objectUID=43ced6ca-3db8-4ac1-9e63-85518483cd8d 
kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:53:38.677075 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-tester-zp5h9\" objectUID=4cc13b07-dd09-4b35-8cfe-55c8ab47e234 kind=\"Pod\" virtual=false\nI0617 04:53:38.681022 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\" objectUID=43ced6ca-3db8-4ac1-9e63-85518483cd8d kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:38.686365 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-7937/inline-volume-tester-zp5h9\" PVC=\"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\"\nI0617 04:53:38.686828 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\"\nI0617 04:53:38.686412 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7937, name: inline-volume-tester-zp5h9-my-volume-0, uid: 43ced6ca-3db8-4ac1-9e63-85518483cd8d] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7937, name: inline-volume-tester-zp5h9, uid: 4cc13b07-dd09-4b35-8cfe-55c8ab47e234] is deletingDependents\nI0617 04:53:38.687131 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0\" objectUID=43ced6ca-3db8-4ac1-9e63-85518483cd8d kind=\"PersistentVolumeClaim\" virtual=false\nE0617 04:53:38.787162 10 namespace_controller.go:162] deletion of namespace apply-1843 failed: unexpected items still remain in namespace: apply-1843 for gvr: /v1, Resource=pods\nW0617 04:53:39.806589 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:39.806786 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:53:40.138968 10 pv_controller.go:887] volume \"local-pvbhl82\" entered phase \"Available\"\nI0617 04:53:40.224375 10 pv_controller.go:938] claim \"persistent-local-volumes-test-7862/pvc-7w4bk\" bound to volume \"local-pvbhl82\"\nI0617 04:53:40.245362 10 pv_controller.go:887] volume \"local-pvbhl82\" entered phase \"Bound\"\nI0617 04:53:40.245386 10 pv_controller.go:990] volume \"local-pvbhl82\" bound to claim \"persistent-local-volumes-test-7862/pvc-7w4bk\"\nI0617 04:53:40.264560 10 pv_controller.go:831] claim \"persistent-local-volumes-test-7862/pvc-7w4bk\" entered phase \"Bound\"\nI0617 04:53:40.350506 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-68c688747f\" objectUID=2e087a2d-dde4-4a17-aa2b-391af395c73a kind=\"ControllerRevision\" virtual=false\nI0617 04:53:40.350702 10 stateful_set.go:443] StatefulSet has been deleted csi-mock-volumes-8776-2746/csi-mockplugin\nI0617 04:53:40.350742 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-0\" objectUID=52badfcb-fc68-4460-b4e6-394ee6d24d3a kind=\"Pod\" virtual=false\nI0617 04:53:40.353174 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-0\" objectUID=52badfcb-fc68-4460-b4e6-394ee6d24d3a kind=\"Pod\" propagationPolicy=Background\nI0617 04:53:40.353291 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-68c688747f\" 
objectUID=2e087a2d-dde4-4a17-aa2b-391af395c73a kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:53:40.368046 10 garbagecollector.go:468] \"Processing object\" object=\"services-616/pause-pod-579f5c859c\" objectUID=f67fb0c8-3d40-481f-a40b-cd94c9885192 kind=\"ReplicaSet\" virtual=false\nI0617 04:53:40.368231 10 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"services-616/pause-pod\"\nI0617 04:53:40.372968 10 garbagecollector.go:580] \"Deleting object\" object=\"services-616/pause-pod-579f5c859c\" objectUID=f67fb0c8-3d40-481f-a40b-cd94c9885192 kind=\"ReplicaSet\" propagationPolicy=Background\nI0617 04:53:40.379305 10 garbagecollector.go:468] \"Processing object\" object=\"services-616/pause-pod-579f5c859c-gnb9q\" objectUID=723cbd2e-64d7-4bac-9e1e-8f3dfaa57763 kind=\"Pod\" virtual=false\nI0617 04:53:40.380068 10 garbagecollector.go:468] \"Processing object\" object=\"services-616/pause-pod-579f5c859c-68987\" objectUID=b6a30786-6950-4b8e-b288-b3fb10be282b kind=\"Pod\" virtual=false\nI0617 04:53:40.382383 10 garbagecollector.go:580] \"Deleting object\" object=\"services-616/pause-pod-579f5c859c-68987\" objectUID=b6a30786-6950-4b8e-b288-b3fb10be282b kind=\"Pod\" propagationPolicy=Background\nI0617 04:53:40.383785 10 garbagecollector.go:580] \"Deleting object\" object=\"services-616/pause-pod-579f5c859c-gnb9q\" objectUID=723cbd2e-64d7-4bac-9e1e-8f3dfaa57763 kind=\"Pod\" propagationPolicy=Background\nI0617 04:53:40.392115 10 garbagecollector.go:468] \"Processing object\" object=\"services-616/pause-pod-579f5c859c-68987\" objectUID=2d080bd2-7184-4cd6-8d62-96c2dfa0346a kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:40.397051 10 garbagecollector.go:468] \"Processing object\" object=\"services-616/pause-pod-579f5c859c-gnb9q\" objectUID=188a1a76-f2a3-4373-9375-220f977c964c kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:40.397412 10 garbagecollector.go:580] \"Deleting object\" object=\"services-616/pause-pod-579f5c859c-68987\" objectUID=2d080bd2-7184-4cd6-8d62-96c2dfa0346a kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:40.401309 10 garbagecollector.go:580] \"Deleting object\" object=\"services-616/pause-pod-579f5c859c-gnb9q\" objectUID=188a1a76-f2a3-4373-9375-220f977c964c kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0617 04:53:40.437752 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nI0617 04:53:40.456878 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-attacher-7c498bffc7\" objectUID=5d861a56-3b2b-4c9b-a64d-4c3c9bd7ee85 kind=\"ControllerRevision\" virtual=false\nI0617 04:53:40.457309 10 stateful_set.go:443] StatefulSet has been deleted csi-mock-volumes-8776-2746/csi-mockplugin-attacher\nI0617 04:53:40.457353 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-attacher-0\" objectUID=7961417e-f469-40c2-a9a4-fd2f3c106fbe kind=\"Pod\" virtual=false\nI0617 04:53:40.459428 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-attacher-7c498bffc7\" objectUID=5d861a56-3b2b-4c9b-a64d-4c3c9bd7ee85 kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:53:40.459676 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-attacher-0\" objectUID=7961417e-f469-40c2-a9a4-fd2f3c106fbe kind=\"Pod\" propagationPolicy=Background\nI0617 
04:53:40.482775 10 garbagecollector.go:468] \"Processing object\" object=\"services-616/echo-sourceip\" objectUID=ca65ddb7-b6fe-4136-ba86-f47186783a10 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:40.489643 10 garbagecollector.go:580] \"Deleting object\" object=\"services-616/echo-sourceip\" objectUID=ca65ddb7-b6fe-4136-ba86-f47186783a10 kind=\"CiliumEndpoint\" propagationPolicy=Background\nW0617 04:53:40.495969 10 endpointslice_controller.go:306] Error syncing endpoint slices for service \"services-616/sourceip-test\", retrying. Error: EndpointSlice informer cache is out of date\nI0617 04:53:40.496069 10 endpoints_controller.go:368] \"Error syncing endpoints, retrying\" service=\"services-616/sourceip-test\" err=\"Operation cannot be fulfilled on endpoints \\\"sourceip-test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0617 04:53:40.496536 10 event.go:294] \"Event occurred\" object=\"services-616/sourceip-test\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint services-616/sourceip-test: Operation cannot be fulfilled on endpoints \\\"sourceip-test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0617 04:53:40.591476 10 garbagecollector.go:468] \"Processing object\" object=\"services-616/sourceip-test-96krg\" objectUID=8c8a3017-24f9-4e7f-bc6e-5fbcbd0579fc kind=\"EndpointSlice\" virtual=false\nI0617 04:53:40.594275 10 garbagecollector.go:580] \"Deleting object\" object=\"services-616/sourceip-test-96krg\" objectUID=8c8a3017-24f9-4e7f-bc6e-5fbcbd0579fc kind=\"EndpointSlice\" propagationPolicy=Background\nI0617 04:53:40.608295 10 stateful_set.go:443] StatefulSet has been deleted csi-mock-volumes-8776-2746/csi-mockplugin-resizer\nI0617 04:53:40.608520 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-resizer-0\" objectUID=99f8af13-a7cb-4736-b0e7-555547a4b41f kind=\"Pod\" virtual=false\nI0617 04:53:40.608829 10 garbagecollector.go:468] \"Processing object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-resizer-54f847bf54\" objectUID=ce17f757-7317-40d2-b496-8bbf95e11dec kind=\"ControllerRevision\" virtual=false\nI0617 04:53:40.610485 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-resizer-0\" objectUID=99f8af13-a7cb-4736-b0e7-555547a4b41f kind=\"Pod\" propagationPolicy=Background\nI0617 04:53:40.610801 10 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8776-2746/csi-mockplugin-resizer-54f847bf54\" objectUID=ce17f757-7317-40d2-b496-8bbf95e11dec kind=\"ControllerRevision\" propagationPolicy=Background\nI0617 04:53:40.710741 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-66, name: inline-volume-tester2-9rxgq, uid: 63f3f518-a32c-49d8-90ec-4e2f5d98c984] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:53:40.710813 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\" objectUID=71e2c63b-3a88-47d9-b9ea-65b60c96170f kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:40.711352 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-66/inline-volume-tester2-9rxgq\" objectUID=674d03fb-cde3-4a77-bc63-de550d226b59 kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:40.711750 10 garbagecollector.go:468] \"Processing object\" 
object=\"ephemeral-66/inline-volume-tester2-9rxgq\" objectUID=63f3f518-a32c-49d8-90ec-4e2f5d98c984 kind=\"Pod\" virtual=false\nI0617 04:53:40.717972 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-66, name: inline-volume-tester2-9rxgq-my-volume-0, uid: 71e2c63b-3a88-47d9-b9ea-65b60c96170f] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-66, name: inline-volume-tester2-9rxgq, uid: 63f3f518-a32c-49d8-90ec-4e2f5d98c984] is deletingDependents\nI0617 04:53:40.721840 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-66/inline-volume-tester2-9rxgq\" objectUID=674d03fb-cde3-4a77-bc63-de550d226b59 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:40.722069 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\" objectUID=71e2c63b-3a88-47d9-b9ea-65b60c96170f kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:53:40.727211 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\" objectUID=71e2c63b-3a88-47d9-b9ea-65b60c96170f kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:40.727818 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-66/inline-volume-tester2-9rxgq\" PVC=\"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\"\nI0617 04:53:40.727831 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\"\nI0617 04:53:40.729696 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-66/inline-volume-tester2-9rxgq\" objectUID=63f3f518-a32c-49d8-90ec-4e2f5d98c984 kind=\"Pod\" virtual=false\nI0617 04:53:40.731957 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-66, name: inline-volume-tester2-9rxgq-my-volume-0, uid: 71e2c63b-3a88-47d9-b9ea-65b60c96170f] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-66, name: inline-volume-tester2-9rxgq, uid: 63f3f518-a32c-49d8-90ec-4e2f5d98c984] is deletingDependents\nI0617 04:53:40.732956 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\" objectUID=71e2c63b-3a88-47d9-b9ea-65b60c96170f kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0617 04:53:40.735322 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-66/inline-volume-tester2-9rxgq-my-volume-0\" objectUID=71e2c63b-3a88-47d9-b9ea-65b60c96170f kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:41.042326 10 namespace_controller.go:185] Namespace has been deleted kubectl-5564\nI0617 04:53:41.276931 10 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8776\nE0617 04:53:41.640614 10 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-633/default: secrets \"default-token-r94vk\" is forbidden: unable to create new content in namespace resourcequota-633 because it is being terminated\nI0617 04:53:41.669134 10 resource_quota_controller.go:311] Resource quota has been deleted resourcequota-633/test-quota\nE0617 04:53:41.752804 10 pv_controller.go:1459] error finding provisioning plugin for claim ephemeral-8422/inline-volume-xnxz6-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:53:41.753109 10 event.go:294] \"Event occurred\" object=\"ephemeral-8422/inline-volume-xnxz6-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" 
reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:53:42.069130 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-8422, name: inline-volume-xnxz6, uid: bd3070bd-7365-41e1-b05a-b423d8825b51] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0617 04:53:42.069419 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-8422/inline-volume-xnxz6-my-volume\" objectUID=5af5b9f3-3d2b-4cf4-8217-1a5b1d8e7525 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:42.069846 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-8422/inline-volume-xnxz6\" objectUID=bd3070bd-7365-41e1-b05a-b423d8825b51 kind=\"Pod\" virtual=false\nI0617 04:53:42.071712 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-8422, name: inline-volume-xnxz6-my-volume, uid: 5af5b9f3-3d2b-4cf4-8217-1a5b1d8e7525] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8422, name: inline-volume-xnxz6, uid: bd3070bd-7365-41e1-b05a-b423d8825b51] is deletingDependents\nI0617 04:53:42.073914 10 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-8422/inline-volume-xnxz6-my-volume\" objectUID=5af5b9f3-3d2b-4cf4-8217-1a5b1d8e7525 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nE0617 04:53:42.077992 10 pv_controller.go:1459] error finding provisioning plugin for claim ephemeral-8422/inline-volume-xnxz6-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0617 04:53:42.078300 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-8422/inline-volume-xnxz6-my-volume\" objectUID=5af5b9f3-3d2b-4cf4-8217-1a5b1d8e7525 kind=\"PersistentVolumeClaim\" virtual=false\nI0617 04:53:42.078397 10 event.go:294] \"Event occurred\" object=\"ephemeral-8422/inline-volume-xnxz6-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0617 04:53:42.081325 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-8422/inline-volume-xnxz6-my-volume\"\nI0617 04:53:42.084490 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-8422/inline-volume-xnxz6\" objectUID=bd3070bd-7365-41e1-b05a-b423d8825b51 kind=\"Pod\" virtual=false\nI0617 04:53:42.085987 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-8422, name: inline-volume-xnxz6, uid: bd3070bd-7365-41e1-b05a-b423d8825b51]\nI0617 04:53:43.551885 10 garbagecollector.go:468] \"Processing object\" object=\"kubectl-7336/httpd\" objectUID=5ac78d4e-fafa-425f-abd6-4e86acefc37e kind=\"CiliumEndpoint\" virtual=false\nI0617 04:53:43.555195 10 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-7336/httpd\" objectUID=5ac78d4e-fafa-425f-abd6-4e86acefc37e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0617 04:53:45.504833 10 namespace_controller.go:185] Namespace has been deleted pod-network-test-8693\nI0617 04:53:45.523383 10 pv_controller.go:938] claim \"provisioning-4388/pvc-9lpvd\" bound to volume \"local-2czb9\"\nI0617 04:53:45.531449 10 pv_controller.go:887] volume \"local-2czb9\" entered phase \"Bound\"\nI0617 04:53:45.531668 10 pv_controller.go:990] volume \"local-2czb9\" bound to claim \"provisioning-4388/pvc-9lpvd\"\nI0617 04:53:45.537642 10 pv_controller.go:831] claim \"provisioning-4388/pvc-9lpvd\" entered phase \"Bound\"\nI0617 04:53:45.538172 10 
pv_controller.go:938] claim \"provisioning-5267/pvc-nvscq\" bound to volume \"local-96d6m\"\nI0617 04:53:45.545965 10 pv_controller.go:887] volume \"local-96d6m\" entered phase \"Bound\"\nI0617 04:53:45.545987 10 pv_controller.go:990] volume \"local-96d6m\" bound to claim \"provisioning-5267/pvc-nvscq\"\nI0617 04:53:45.551717 10 pv_controller.go:831] claim \"provisioning-5267/pvc-nvscq\" entered phase \"Bound\"\nI0617 04:53:45.552301 10 event.go:294] \"Event occurred\" object=\"volume-provisioning-106/pvc-hv5dl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nW0617 04:53:45.617133 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:45.617157 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0617 04:53:46.398649 10 pvc_protection_controller.go:281] \"Pod uses PVC\" pod=\"ephemeral-9508/inline-volume-tester-4cjrx\" PVC=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\"\nI0617 04:53:46.398669 10 pvc_protection_controller.go:174] \"Keeping PVC because it is being used\" PVC=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\"\nW0617 04:53:46.463554 10 reconciler.go:344] Multi-Attach error for volume \"pvc-3d0b7103-bb98-487f-a896-d85fda753f00\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f88927fd231c5577\") from node \"ip-172-20-39-216.eu-west-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-46-241.eu-west-1.compute.internal and can't be attached to another\nI0617 04:53:46.463870 10 event.go:294] \"Event occurred\" object=\"volume-9212/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-3d0b7103-bb98-487f-a896-d85fda753f00\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI0617 04:53:46.689000 10 namespace_controller.go:185] Namespace has been deleted resourcequota-633\nI0617 04:53:46.702902 10 pvc_protection_controller.go:269] \"PVC is unused\" PVC=\"ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0\"\nI0617 04:53:46.707844 10 garbagecollector.go:468] \"Processing object\" object=\"ephemeral-9508/inline-volume-tester-4cjrx\" objectUID=a5e85613-43e5-4804-873a-1d0412fa4d83 kind=\"Pod\" virtual=false\nI0617 04:53:46.710823 10 pv_controller.go:648] volume \"pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa\" is released and reclaim policy \"Delete\" will be executed\nI0617 04:53:46.720821 10 pv_controller.go:887] volume \"pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa\" entered phase \"Released\"\nI0617 04:53:46.726219 10 pv_controller.go:1348] isVolumeReleased[pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa]: volume is released\nI0617 04:53:46.726292 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-9508, name: inline-volume-tester-4cjrx, uid: a5e85613-43e5-4804-873a-1d0412fa4d83]\nI0617 04:53:47.235468 10 event.go:294] \"Event occurred\" object=\"ephemeral-8422-2270/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod 
csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0617 04:53:47.554838 10 event.go:294] \"Event occurred\" object=\"ephemeral-8422/inline-volume-tester-qccgk-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-qccgk to be scheduled\"\nI0617 04:53:47.826000 10 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-5510/my-hostname-basic-c837e5c8-c44d-4d0e-8498-3b055b1efa42\" need=1 creating=1\nI0617 04:53:47.834982 10 event.go:294] \"Event occurred\" object=\"replicaset-5510/my-hostname-basic-c837e5c8-c44d-4d0e-8498-3b055b1efa42\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: my-hostname-basic-c837e5c8-c44d-4d0e-8498-3b055b1efa42-6zvv9\"\nI0617 04:53:48.716653 10 event.go:294] \"Event occurred\" object=\"ephemeral-8422/inline-volume-tester-qccgk-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-8422\\\" or manually created by system administrator\"\nI0617 04:53:48.716952 10 event.go:294] \"Event occurred\" object=\"ephemeral-8422/inline-volume-tester-qccgk-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-8422\\\" or manually created by system administrator\"\nE0617 04:53:49.820402 10 tokens_controller.go:262] error synchronizing serviceaccount kubectl-7336/default: secrets \"default-token-chpw5\" is forbidden: unable to create new content in namespace kubectl-7336 because it is being terminated\nE0617 04:53:50.706897 10 tokens_controller.go:262] error synchronizing serviceaccount port-forwarding-2365/default: secrets \"default-token-zx9wb\" is forbidden: unable to create new content in namespace port-forwarding-2365 because it is being terminated\nE0617 04:53:50.820792 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods\nI0617 04:53:50.927928 10 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8776-2746\nI0617 04:53:50.999238 10 namespace_controller.go:185] Namespace has been deleted services-616\nI0617 04:53:51.864140 10 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-9204-4647/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0617 04:53:51.969975 10 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-9204-4647/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nE0617 04:53:52.046759 10 tokens_controller.go:262] error synchronizing serviceaccount provisioning-1048/default: secrets \"default-token-hvz7q\" is forbidden: unable to create new content in namespace provisioning-1048 because it is being terminated\nW0617 04:53:52.072553 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0617 04:53:52.072710 10 
reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0617 04:53:52.215982 10 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-7862/pod-e946faca-75b7-4343-9ce5-c200bd71ee87" PVC="persistent-local-volumes-test-7862/pvc-7w4bk"
I0617 04:53:52.216177 10 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-7862/pvc-7w4bk"
I0617 04:53:52.698382 10 pv_controller.go:887] volume "pvc-0e01d4b2-fe57-4b38-b4e7-4316c2edd93d" entered phase "Bound"
I0617 04:53:52.698555 10 pv_controller.go:990] volume "pvc-0e01d4b2-fe57-4b38-b4e7-4316c2edd93d" bound to claim "ephemeral-8422/inline-volume-tester-qccgk-my-volume-0"
I0617 04:53:52.704716 10 pv_controller.go:831] claim "ephemeral-8422/inline-volume-tester-qccgk-my-volume-0" entered phase "Bound"
I0617 04:53:52.808544 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-0e01d4b2-fe57-4b38-b4e7-4316c2edd93d" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-8422^7c1f9649-edf9-11ec-a161-daa7a98fc145") from node "ip-172-20-46-241.eu-west-1.compute.internal" 
I0617 04:53:53.253851 10 pvc_protection_controller.go:269] "PVC is unused" PVC="provisioning-5267/pvc-nvscq"
I0617 04:53:53.259011 10 pv_controller.go:648] volume "local-96d6m" is released and reclaim policy "Retain" will be executed
I0617 04:53:53.261888 10 pv_controller.go:887] volume "local-96d6m" entered phase "Released"
I0617 04:53:53.339853 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-0e01d4b2-fe57-4b38-b4e7-4316c2edd93d" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-8422^7c1f9649-edf9-11ec-a161-daa7a98fc145") from node "ip-172-20-46-241.eu-west-1.compute.internal" 
I0617 04:53:53.340001 10 event.go:294] "Event occurred" object="ephemeral-8422/inline-volume-tester-qccgk" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-0e01d4b2-fe57-4b38-b4e7-4316c2edd93d\" "
I0617 04:53:53.364151 10 pv_controller_base.go:533] deletion of claim "provisioning-5267/pvc-nvscq" was already processed
I0617 04:53:53.968409 10 pvc_protection_controller.go:269] "PVC is unused" PVC="provisioning-4388/pvc-9lpvd"
I0617 04:53:53.973188 10 pv_controller.go:648] volume "local-2czb9" is released and reclaim policy "Retain" will be executed
I0617 04:53:53.978808 10 pv_controller.go:887] volume "local-2czb9" entered phase "Released"
I0617 04:53:54.077075 10 pv_controller_base.go:533] deletion of claim "provisioning-4388/pvc-9lpvd" was already processed
I0617 04:53:54.378531 10 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-7862/pod-e946faca-75b7-4343-9ce5-c200bd71ee87" PVC="persistent-local-volumes-test-7862/pvc-7w4bk"
I0617 04:53:54.378571 10 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-7862/pvc-7w4bk"
I0617 04:53:54.576693 10 pvc_protection_controller.go:281] "Pod uses PVC" pod="persistent-local-volumes-test-7862/pod-e946faca-75b7-4343-9ce5-c200bd71ee87" PVC="persistent-local-volumes-test-7862/pvc-7w4bk"
I0617 04:53:54.576714 10 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-7862/pvc-7w4bk"
I0617 04:53:54.582074 10 pvc_protection_controller.go:269] "PVC is unused" PVC="persistent-local-volumes-test-7862/pvc-7w4bk"
I0617 04:53:54.587095 10 pv_controller.go:648] volume "local-pvbhl82" is released and reclaim policy "Retain" will be executed
I0617 04:53:54.590133 10 pv_controller.go:887] volume "local-pvbhl82" entered phase "Released"
I0617 04:53:54.593637 10 pv_controller_base.go:533] deletion of claim "persistent-local-volumes-test-7862/pvc-7w4bk" was already processed
I0617 04:53:54.896908 10 namespace_controller.go:185] Namespace has been deleted kubectl-7336
W0617 04:53:54.901519 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0617 04:53:54.901541 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0617 04:53:55.438204 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00b2aeffaa7f9be55") on node "ip-172-20-46-241.eu-west-1.compute.internal" 
I0617 04:53:55.446297 10 operation_generator.go:1641] Verified volume is safe to detach for volume "pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00b2aeffaa7f9be55") on node "ip-172-20-46-241.eu-west-1.compute.internal" 
I0617 04:53:55.452782 10 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-3d0b7103-bb98-487f-a896-d85fda753f00" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f88927fd231c5577") on node "ip-172-20-46-241.eu-west-1.compute.internal" 
I0617 04:53:55.455897 10 operation_generator.go:1641] Verified volume is safe to detach for volume "pvc-3d0b7103-bb98-487f-a896-d85fda753f00" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f88927fd231c5577") on node "ip-172-20-46-241.eu-west-1.compute.internal" 
W0617 04:53:55.915867 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0617 04:53:55.915955 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0617 04:53:56.862584 10 garbagecollector.go:468] "Processing object" object="container-probe-4939/liveness-a187e0a7-dd49-4069-90c0-892095253c53" objectUID=7dfd7dab-cef5-4a1c-a42b-313837934741 kind="CiliumEndpoint" virtual=false
I0617 04:53:56.866818 10 garbagecollector.go:580] "Deleting object" object="container-probe-4939/liveness-a187e0a7-dd49-4069-90c0-892095253c53" objectUID=7dfd7dab-cef5-4a1c-a42b-313837934741 kind="CiliumEndpoint" propagationPolicy=Background
I0617 04:53:57.141603 10 namespace_controller.go:185] Namespace has been deleted provisioning-1048
E0617 04:53:57.802287 10 tokens_controller.go:262] error synchronizing serviceaccount emptydir-2834/default: secrets "default-token-zt4k5" is forbidden: unable to create new content in namespace emptydir-2834 because it is being terminated
I0617 04:53:58.130992 10 pv_controller.go:887] volume "hostpath-dhmrz" entered phase "Available"
I0617 04:53:58.447709 10 pv_controller.go:938] claim "pv-protection-4707/pvc-phjjx" bound to volume "hostpath-dhmrz"
I0617 04:53:58.454693 10 pv_controller.go:887] volume "hostpath-dhmrz" entered phase "Bound"
I0617 04:53:58.454716 10 pv_controller.go:990] volume "hostpath-dhmrz" bound to claim "pv-protection-4707/pvc-phjjx"
I0617 04:53:58.460671 10 pv_controller.go:831] claim "pv-protection-4707/pvc-phjjx" entered phase "Bound"
E0617 04:53:58.675650 10 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-7862/default: secrets "default-token-khx8w" is forbidden: unable to create new content in namespace persistent-local-volumes-test-7862 because it is being terminated
I0617 04:53:58.880502 10 pvc_protection_controller.go:269] "PVC is unused" PVC="pv-protection-4707/pvc-phjjx"
I0617 04:53:58.886075 10 pv_controller.go:648] volume "hostpath-dhmrz" is released and reclaim policy "Retain" will be executed
I0617 04:53:58.889441 10 pv_controller.go:887] volume "hostpath-dhmrz" entered phase "Released"
I0617 04:53:58.898847 10 pv_controller_base.go:533] deletion of claim "pv-protection-4707/pvc-phjjx" was already processed
I0617 04:54:00.117056 10 event.go:294] "Event occurred" object="cronjob-2817/concurrent" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-27590694"
I0617 04:54:00.117173 10 job_controller.go:498] enqueueing job cronjob-2817/concurrent-27590694
I0617 04:54:00.160404 10 job_controller.go:498] enqueueing job cronjob-2817/concurrent-27590694
I0617 04:54:00.168348 10 event.go:294] "Event occurred" object="cronjob-2817/concurrent-27590694" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: concurrent-27590694-hfg4z"
I0617 04:54:00.184748 10 job_controller.go:498] enqueueing job cronjob-2817/concurrent-27590694
I0617 04:54:00.206902 10 job_controller.go:498] enqueueing job cronjob-2817/concurrent-27590694
I0617 04:54:00.526363 10 event.go:294] "Event occurred" object="volume-provisioning-106/pvc-hv5dl" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0617 04:54:00.535931 10 pv_controller.go:1348] isVolumeReleased[pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa]: volume is released
E0617 04:54:00.626638 10 tokens_controller.go:262] error synchronizing serviceaccount nettest-4226/default: secrets "default-token-kn59w" is forbidden: unable to create new content in namespace nettest-4226 because it is being terminated
I0617 04:54:00.770947 10 replica_set.go:563] "Too few replicas" replicaSet="replicaset-5510/my-hostname-basic-c837e5c8-c44d-4d0e-8498-3b055b1efa42" need=1 creating=1
I0617 04:54:00.773868 10 garbagecollector.go:468] "Processing object" object="replicaset-5510/my-hostname-basic-c837e5c8-c44d-4d0e-8498-3b055b1efa42-6zvv9" objectUID=d82970cf-41dd-4a8a-bb0c-dcfd734bc163 kind="CiliumEndpoint" virtual=false
I0617 04:54:00.776701 10 garbagecollector.go:580] "Deleting object" object="replicaset-5510/my-hostname-basic-c837e5c8-c44d-4d0e-8498-3b055b1efa42-6zvv9" objectUID=d82970cf-41dd-4a8a-bb0c-dcfd734bc163 kind="CiliumEndpoint" propagationPolicy=Background
I0617 04:54:00.838194 10 garbagecollector.go:468] "Processing object" object="cronjob-2817/concurrent-27590694" objectUID=ecef8c1b-bb3f-4da6-815b-021837f10949 kind="Job" virtual=false
I0617 04:54:00.843007 10 garbagecollector.go:580] "Deleting object" object="cronjob-2817/concurrent-27590694" objectUID=ecef8c1b-bb3f-4da6-815b-021837f10949 kind="Job" propagationPolicy=Background
I0617 04:54:00.845734 10 garbagecollector.go:468] "Processing object" object="cronjob-2817/concurrent-27590694-hfg4z" objectUID=9f9eaad3-1de8-4598-90c8-2108e8243aa3 kind="Pod" virtual=false
I0617 04:54:00.845958 10 job_controller.go:498] enqueueing job cronjob-2817/concurrent-27590694
E0617 04:54:00.846031 10 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object cronjob-2817/concurrent-27590694: could not find key for obj \"cronjob-2817/concurrent-27590694\"" job="cronjob-2817/concurrent-27590694"
I0617 04:54:00.847612 10 garbagecollector.go:580] "Deleting object" object="cronjob-2817/concurrent-27590694-hfg4z" objectUID=9f9eaad3-1de8-4598-90c8-2108e8243aa3 kind="Pod" propagationPolicy=Background
I0617 04:54:00.897585 10 namespace_controller.go:185] Namespace has been deleted port-forwarding-2365
I0617 04:54:01.053180 10 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-8422, name: inline-volume-tester-qccgk, uid: 660762c0-ff5a-465d-b48d-6ebadb66e77c] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0617 04:54:01.053544 10 garbagecollector.go:468] "Processing object" object="ephemeral-8422/inline-volume-tester-qccgk-my-volume-0" objectUID=0e01d4b2-fe57-4b38-b4e7-4316c2edd93d kind="PersistentVolumeClaim" virtual=false
I0617 04:54:01.053855 10 garbagecollector.go:468] "Processing object" object="ephemeral-8422/inline-volume-tester-qccgk" objectUID=870e2247-0b65-4007-ba73-e2aa736d5109 kind="CiliumEndpoint" virtual=false
I0617 04:54:01.054096 10 garbagecollector.go:468] "Processing object" object="ephemeral-8422/inline-volume-tester-qccgk" objectUID=660762c0-ff5a-465d-b48d-6ebadb66e77c kind="Pod" virtual=false
I0617 04:54:01.057844 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-8422, name: inline-volume-tester-qccgk-my-volume-0, uid: 0e01d4b2-fe57-4b38-b4e7-4316c2edd93d] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8422, name: inline-volume-tester-qccgk, uid: 660762c0-ff5a-465d-b48d-6ebadb66e77c] is deletingDependents
I0617 04:54:01.059631 10 garbagecollector.go:580] "Deleting object" object="ephemeral-8422/inline-volume-tester-qccgk-my-volume-0" objectUID=0e01d4b2-fe57-4b38-b4e7-4316c2edd93d kind="PersistentVolumeClaim" propagationPolicy=Background
I0617 04:54:01.059861 10 garbagecollector.go:580] "Deleting object" object="ephemeral-8422/inline-volume-tester-qccgk" objectUID=870e2247-0b65-4007-ba73-e2aa736d5109 kind="CiliumEndpoint" propagationPolicy=Background
I0617 04:54:01.064238 10 garbagecollector.go:468] "Processing object" object="ephemeral-8422/inline-volume-tester-qccgk-my-volume-0" objectUID=0e01d4b2-fe57-4b38-b4e7-4316c2edd93d kind="PersistentVolumeClaim" virtual=false
I0617 04:54:01.065956 10 pvc_protection_controller.go:281] "Pod uses PVC" pod="ephemeral-8422/inline-volume-tester-qccgk" PVC="ephemeral-8422/inline-volume-tester-qccgk-my-volume-0"
I0617 04:54:01.066139 10 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="ephemeral-8422/inline-volume-tester-qccgk-my-volume-0"
I0617 04:54:01.065906 10 garbagecollector.go:468] "Processing object" object="ephemeral-8422/inline-volume-tester-qccgk" objectUID=660762c0-ff5a-465d-b48d-6ebadb66e77c kind="Pod" virtual=false
I0617 04:54:01.069726 10 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-8422, name: inline-volume-tester-qccgk-my-volume-0, uid: 0e01d4b2-fe57-4b38-b4e7-4316c2edd93d] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8422, name: inline-volume-tester-qccgk, uid: 660762c0-ff5a-465d-b48d-6ebadb66e77c] is deletingDependents
I0617 04:54:01.070677 10 garbagecollector.go:580] "Deleting object" object="ephemeral-8422/inline-volume-tester-qccgk-my-volume-0" objectUID=0e01d4b2-fe57-4b38-b4e7-4316c2edd93d kind="PersistentVolumeClaim" propagationPolicy=Background
I0617 04:54:01.072655 10 garbagecollector.go:468] "Processing object" object="ephemeral-8422/inline-volume-tester-qccgk-my-volume-0" objectUID=0e01d4b2-fe57-4b38-b4e7-4316c2edd93d kind="PersistentVolumeClaim" virtual=false
I0617 04:54:01.474787 10 event.go:294] "Event occurred" object="fsgroupchangepolicy-102/awsc4pd2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0617 04:54:01.697974 10 event.go:294] "Event occurred" object="fsgroupchangepolicy-102/awsc4pd2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0617 04:54:01.697997 10 event.go:294] "Event occurred" object="fsgroupchangepolicy-102/awsc4pd2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0617 04:54:02.128172 10 event.go:294] "Event occurred" object="csi-mock-volumes-9204/pvc-cc6hr" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-9204\" or manually created by system administrator"
I0617 04:54:02.159631 10 pv_controller.go:887] volume "pvc-99f5ac65-880a-4386-aafb-582b3f271c1a" entered phase "Bound"
I0617 04:54:02.161075 10 pv_controller.go:990] volume "pvc-99f5ac65-880a-4386-aafb-582b3f271c1a" bound to claim "csi-mock-volumes-9204/pvc-cc6hr"
I0617 04:54:02.161550 10 pv_controller.go:1348] isVolumeReleased[pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa]: volume is released
I0617 04:54:02.170760 10 pv_controller.go:831] claim "csi-mock-volumes-9204/pvc-cc6hr" entered phase "Bound"
I0617 04:54:02.311142 10 pv_controller_base.go:533] deletion of claim "ephemeral-9508/inline-volume-tester-4cjrx-my-volume-0" was already processed
I0617 04:54:02.475148 10 garbagecollector.go:468] "Processing object" object="prestop-7073/server" objectUID=9c30454d-7edb-433a-bdf0-cf5b7e972860 kind="CiliumEndpoint" virtual=false
I0617 04:54:02.478860 10 garbagecollector.go:580] "Deleting object" object="prestop-7073/server" objectUID=9c30454d-7edb-433a-bdf0-cf5b7e972860 kind="CiliumEndpoint" propagationPolicy=Background
I0617 04:54:02.564293 10 pv_controller.go:887] volume "local-pvggfsk" entered phase "Available"
I0617 04:54:02.603729 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-99f5ac65-880a-4386-aafb-582b3f271c1a" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-9204^4") from node "ip-172-20-46-241.eu-west-1.compute.internal" 
I0617 04:54:02.659861 10 pv_controller.go:938] claim "persistent-local-volumes-test-5954/pvc-ssbsx" bound to volume "local-pvggfsk"
I0617 04:54:02.671637 10 pv_controller.go:887] volume "local-pvggfsk" entered phase "Bound"
I0617 04:54:02.671676 10 pv_controller.go:990] volume "local-pvggfsk" bound to claim "persistent-local-volumes-test-5954/pvc-ssbsx"
I0617 04:54:02.679808 10 pv_controller.go:831] claim "persistent-local-volumes-test-5954/pvc-ssbsx" entered phase "Bound"
I0617 04:54:02.802983 10 event.go:294] "Event occurred" object="provisioning-4787-3652/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0617 04:54:02.898114 10 namespace_controller.go:185] Namespace has been deleted emptydir-2834
I0617 04:54:02.939751 10 replica_set.go:563] "Too few replicas" replicaSet="crd-webhook-6705/sample-crd-conversion-webhook-deployment-67c86bcf4b" need=1 creating=1
I0617 04:54:02.940279 10 event.go:294] "Event occurred" object="crd-webhook-6705/sample-crd-conversion-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-crd-conversion-webhook-deployment-67c86bcf4b to 1"
I0617 04:54:02.949020 10 event.go:294] "Event occurred" object="crd-webhook-6705/sample-crd-conversion-webhook-deployment-67c86bcf4b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-crd-conversion-webhook-deployment-67c86bcf4b-rgcnv"
I0617 04:54:02.951435 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume "pvc-54ebc89b-eab5-4199-a298-fe5bcf99f9aa" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00b2aeffaa7f9be55") on node "ip-172-20-46-241.eu-west-1.compute.internal" 
I0617 04:54:02.953391 10 deployment_controller.go:490] "Error syncing deployment" deployment="crd-webhook-6705/sample-crd-conversion-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-crd-conversion-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0617 04:54:02.966718 10 deployment_controller.go:490] "Error syncing deployment" deployment="crd-webhook-6705/sample-crd-conversion-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-crd-conversion-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0617 04:54:03.120319 10 event.go:294] "Event occurred" object="provisioning-4787/csi-hostpath2xkcs" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-4787\" or manually created by system administrator"
I0617 04:54:03.120344 10 event.go:294] "Event occurred" object="provisioning-4787/csi-hostpath2xkcs" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-4787\" or manually created by system administrator"
I0617 04:54:03.146020 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-99f5ac65-880a-4386-aafb-582b3f271c1a" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-9204^4") from node "ip-172-20-46-241.eu-west-1.compute.internal" 
I0617 04:54:03.146225 10 event.go:294] "Event occurred" object="csi-mock-volumes-9204/pvc-volume-tester-6llxg" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-99f5ac65-880a-4386-aafb-582b3f271c1a\" "
E0617 04:54:03.381490 10 tokens_controller.go:262] error synchronizing serviceaccount hostpath-8969/default: secrets "default-token-nlp4k" is forbidden: unable to create new content in namespace hostpath-8969 because it is being terminated
I0617 04:54:03.754601 10 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7862
I0617 04:54:03.802055 10 namespace_controller.go:185] Namespace has been deleted pods-416
E0617 04:54:04.247290 10 tokens_controller.go:262] error synchronizing serviceaccount pv-protection-4707/default: secrets "default-token-kn56r" is forbidden: unable to create new content in namespace pv-protection-4707 because it is being terminated
I0617 04:54:04.279808 10 replica_set.go:563] "Too few replicas" replicaSet="webhook-2315/sample-webhook-deployment-6c69dbd86b" need=1 creating=1
I0617 04:54:04.280544 10 event.go:294] "Event occurred" object="webhook-2315/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-6c69dbd86b to 1"
I0617 04:54:04.290160 10 event.go:294] "Event occurred" object="webhook-2315/sample-webhook-deployment-6c69dbd86b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-6c69dbd86b-fpxrr"
I0617 04:54:04.295989 10 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-2315/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0617 04:54:04.823146 10 namespace_controller.go:185] Namespace has been deleted volumelimits-140
I0617 04:54:05.077905 10 pv_controller.go:887] volume "pvc-32337d42-7bcb-4aca-bdfe-53488af21026" entered phase "Bound"
I0617 04:54:05.078631 10 pv_controller.go:990] volume "pvc-32337d42-7bcb-4aca-bdfe-53488af21026" bound to claim "fsgroupchangepolicy-102/awsc4pd2"
I0617 04:54:05.089272 10 pv_controller.go:831] claim "fsgroupchangepolicy-102/awsc4pd2" entered phase "Bound"
E0617 04:54:05.559878 10 tokens_controller.go:262] error synchronizing serviceaccount security-context-test-1598/default: secrets "default-token-njxqh" is forbidden: unable to create new content in namespace security-context-test-1598 because it is being terminated
I0617 04:54:05.673333 10 namespace_controller.go:185] Namespace has been deleted provisioning-4388
I0617 04:54:05.676674 10 namespace_controller.go:185] Namespace has been deleted provisioning-5267
I0617 04:54:05.735193 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-32337d42-7bcb-4aca-bdfe-53488af21026" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-03be2692189e792c1") from node "ip-172-20-39-216.eu-west-1.compute.internal" 
I0617 04:54:05.828901 10 namespace_controller.go:185] Namespace has been deleted replicaset-5510
E0617 04:54:06.179047 10 tokens_controller.go:262] error synchronizing serviceaccount events-3156/default: serviceaccounts "default" not found
E0617 04:54:06.218246 10 tokens_controller.go:262] error synchronizing serviceaccount cronjob-2817/default: secrets "default-token-lvzqb" is forbidden: unable to create new content in namespace cronjob-2817 because it is being terminated
I0617 04:54:07.015677 10 namespace_controller.go:185] Namespace has been deleted services-2133
I0617 04:54:07.240792 10 namespace_controller.go:185] Namespace has been deleted container-probe-4939
E0617 04:54:07.855324 10 pv_controller.go:1459] error finding provisioning plugin for claim volumemode-2389/pvc-skjns: storageclass.storage.k8s.io "volumemode-2389" not found
I0617 04:54:07.855549 10 event.go:294] "Event occurred" object="volumemode-2389/pvc-skjns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volumemode-2389\" not found"
I0617 04:54:07.969531 10 pv_controller.go:887] volume "local-ks6tx" entered phase "Available"
I0617 04:54:08.252335 10 pv_controller.go:887] volume "pvc-9d21bad5-9aeb-4de9-b8bf-2f78a1fa8090" entered phase "Bound"
I0617 04:54:08.252474 10 pv_controller.go:990] volume "pvc-9d21bad5-9aeb-4de9-b8bf-2f78a1fa8090" bound to claim "provisioning-4787/csi-hostpath2xkcs"
I0617 04:54:08.259850 10 pv_controller.go:831] claim "provisioning-4787/csi-hostpath2xkcs" entered phase "Bound"
I0617 04:54:08.408451 10 namespace_controller.go:185] Namespace has been deleted hostpath-8969
W0617 04:54:08.485927 10 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0617 04:54:08.485948 10 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0617 04:54:09.076044 10 operation_generator.go:528] DetachVolume.Detach succeeded for volume "pvc-3d0b7103-bb98-487f-a896-d85fda753f00" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f88927fd231c5577") on node "ip-172-20-46-241.eu-west-1.compute.internal" 
E0617 04:54:09.126694 10 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-9508/default: secrets "default-token-dwz86" is forbidden: unable to create new content in namespace ephemeral-9508 because it is being terminated
I0617 04:54:09.156950 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-3d0b7103-bb98-487f-a896-d85fda753f00" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f88927fd231c5577") from node "ip-172-20-39-216.eu-west-1.compute.internal" 
I0617 04:54:09.318349 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-32337d42-7bcb-4aca-bdfe-53488af21026" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-03be2692189e792c1") from node "ip-172-20-39-216.eu-west-1.compute.internal" 
I0617 04:54:09.318479 10 event.go:294] "Event occurred" object="fsgroupchangepolicy-102/pod-e57f8d94-95a7-45bd-a332-927c95407689" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-32337d42-7bcb-4aca-bdfe-53488af21026\" "
I0617 04:54:09.341041 10 namespace_controller.go:185] Namespace has been deleted pv-protection-4707
I0617 04:54:09.501591 10 pv_controller.go:887] volume "local-pvjlnvm" entered phase "Available"
I0617 04:54:09.605394 10 pv_controller.go:938] claim "persistent-local-volumes-test-4601/pvc-lskqh" bound to volume "local-pvjlnvm"
I0617 04:54:09.613655 10 pv_controller.go:887] volume "local-pvjlnvm" entered phase "Bound"
I0617 04:54:09.613763 10 pv_controller.go:990] volume "local-pvjlnvm" bound to claim "persistent-local-volumes-test-4601/pvc-lskqh"
I0617 04:54:09.621561 10 pv_controller.go:831] claim "persistent-local-volumes-test-4601/pvc-lskqh" entered phase "Bound"
I0617 04:54:09.736454 10 pvc_protection_controller.go:281] "Pod uses PVC" pod="ephemeral-7937/inline-volume-tester-zp5h9" PVC="ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0"
I0617 04:54:09.736624 10 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0"
I0617 04:54:09.745588 10 pvc_protection_controller.go:269] "PVC is unused" PVC="ephemeral-7937/inline-volume-tester-zp5h9-my-volume-0"
I0617 04:54:09.751395 10 garbagecollector.go:468] "Processing object" object="ephemeral-7937/inline-volume-tester-zp5h9" objectUID=4cc13b07-dd09-4b35-8cfe-55c8ab47e234 kind="Pod" virtual=false
I0617 04:54:09.754142 10 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-7937, name: inline-volume-tester-zp5h9, uid: 4cc13b07-dd09-4b35-8cfe-55c8ab47e234]
I0617 04:54:09.754389 10 pv_controller.go:648] volume "pvc-43ced6ca-3db8-4ac1-9e63-85518483cd8d" is released and reclaim policy "Delete" will be executed
I0617 04:54:09.760346 10 pv_controller.go:887] volume "pvc-43ced6ca-3db8-4ac1-9e63-85518483cd8d" entered phase "Released"
I0617 04:54:09.762489 10 pv_controller.go:1348] isVolumeReleased[pvc-43ced6ca-3db8-4ac1-9e63-85518483cd8d]: volume is released
I0617 04:54:09.901090 10 event.go:294] "Event occurred" object="provisioning-7978-3675/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
E0617 04:54:09.929225 10 pv_controller.go:1459] error finding provisioning plugin for claim volumemode-860/pvc-685v7: storageclass.storage.k8s.io "volumemode-860" not found
I0617 04:54:09.929418 10 event.go:294] "Event occurred" object="volumemode-860/pvc-685v7" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volumemode-860\" not found"
I0617 04:54:09.976481 10 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-9d21bad5-9aeb-4de9-b8bf-2f78a1fa8090" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-4787^85673b97-edf9-11ec-a224-964e649869e2") from node "ip-172-20-38-101.eu-west-1.compute.internal" 
I0617 04:54:10.039903 10 pv_controller.go:887] volume "local-cm4kh" entered phase "Available"
I0617 04:54:10.210720 10 event.go:294] "Event occurred" object="provisioning-7978/csi-hostpathqp5f8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-7978\" or manually created by system administrator"
I0617 04:54:10.358852 10 pvc_protection_controller.go:269] "PVC is unused" PVC="persistent-local-volumes-test-4601/pvc-lskqh"
I0617 04:54:10.364768 10 pv_controller.go:648] volume "local-pvjlnvm" is released and reclaim policy "Retain" will be executed
I0617 04:54:10.368872 10 pv_controller.go:887] volume "local-pvjlnvm" entered phase "Released"
I0617 04:54:10.470655 10 pv_controller_base.go:533] deletion of claim "persistent-local-volumes-test-4601/pvc-lskqh" was already processed
I0617 04:54:10.492421 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-9d21bad5-9aeb-4de9-b8bf-2f78a1fa8090" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-4787^85673b97-edf9-11ec-a224-964e649869e2") from node "ip-172-20-38-101.eu-west-1.compute.internal" 
I0617 04:54:10.492628 10 event.go:294] "Event occurred" object="provisioning-4787/pod-subpath-test-dynamicpv-qb5z" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-9d21bad5-9aeb-4de9-b8bf-2f78a1fa8090\" "
I0617 04:54:10.584107 10 namespace_controller.go:185] Namespace has been deleted security-context-test-1598
I0617 04:54:11.151104 10 namespace_controller.go:185] Namespace has been deleted downward-api-1692
I0617 04:54:11.200634 10 namespace_controller.go:185] Namespace has been deleted events-3156
I0617 04:54:11.390806 10 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-3d0b7103-bb98-487f-a896-d85fda753f00" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f88927fd231c5577") from node "ip-172-20-39-216.eu-west-1.compute.internal" 
I0617 04:54:11.390944 10 event.go:294] "Event occurred" object="volume-9212/aws-client" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-3d0b7103-bb98-487f-a896-d85fda753f00\" "
E0617 04:54:11.393419 10 namespace_controller.go:162] deletion of namespace job-3451 failed: unexpected items still remain in namespace: job-3451 for gvr: /v1, Resource=pods
I0617 04:54:11.525087 10 pvc_protection_controller.go:269] "PVC is unused" PVC="csi-mock-volumes-9204/pvc-cc6hr"
I0617 04:54:11.534598 10 pv_controller.go:648] volume "pvc-99f5ac65-880a-4386-aafb-582b3f271c1a" is released and reclaim policy "Delete" will be executed
I0617 04:54:11.538557 10 pv_controller.go:887] volume "pvc-99f5ac65-880a-4386-aafb-582b3f271c1a" entered phase "Released"
I0617 04:54: