Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 1h45m
Revision | master
... skipping 209 lines ... + CHANNELS=/tmp/channels.KkCOKU7oE + kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --down --kops-binary-path=/tmp/kops.TcEmRlCxJ I0621 20:08:21.384343 6119 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true I0621 20:08:21.385179 6119 app.go:61] RunDir for this run: "/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791" I0621 20:08:21.460080 6119 app.go:120] ID for this run: "aab96967-f19d-11ec-8dfe-daa417708791" I0621 20:08:21.473272 6119 dumplogs.go:45] /tmp/kops.TcEmRlCxJ toolbox dump --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu W0621 20:08:21.975713 6119 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1 I0621 20:08:21.975757 6119 down.go:48] /tmp/kops.TcEmRlCxJ delete cluster --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --yes I0621 20:08:21.994955 6138 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true I0621 20:08:21.995049 6138 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true I0621 20:08:21.995054 6138 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" not found Error: exit status 1 + echo 'kubetest2 down failed' kubetest2 down failed + [[ v == \v ]] + KOPS_BASE_URL= ++ kops-download-release v1.23.2 ++ local kops +++ mktemp -t kops.XXXXXXXXX ++ kops=/tmp/kops.0x62OfvRz ... skipping 7 lines ... + kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --up --kops-binary-path=/tmp/kops.0x62OfvRz --kubernetes-version=v1.23.1 --control-plane-size=1 --template-path=tests/e2e/templates/many-addons.yaml.tmpl '--create-args=--networking calico' I0621 20:08:23.970136 6173 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true I0621 20:08:23.970944 6173 app.go:61] RunDir for this run: "/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791" I0621 20:08:24.020993 6173 app.go:120] ID for this run: "aab96967-f19d-11ec-8dfe-daa417708791" I0621 20:08:24.021302 6173 up.go:44] Cleaning up any leaked resources from previous cluster I0621 20:08:24.021394 6173 dumplogs.go:45] /tmp/kops.0x62OfvRz toolbox dump --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu W0621 20:08:24.496948 6173 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1 I0621 20:08:24.496994 6173 down.go:48] /tmp/kops.0x62OfvRz delete cluster --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --yes I0621 20:08:24.516416 6196 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true I0621 20:08:24.516504 6196 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true I0621 20:08:24.516509 6196 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" not found I0621 20:08:24.965568 6173 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 2022/06/21 20:08:24 failed to get external ip from metadata service: 
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404 I0621 20:08:24.974746 6173 http.go:37] curl https://ip.jsb.workers.dev I0621 20:08:25.094019 6173 template.go:58] /tmp/kops.0x62OfvRz toolbox template --template tests/e2e/templates/many-addons.yaml.tmpl --output /tmp/kops-template565964388/manifest.yaml --values /tmp/kops-template565964388/values.yaml --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io I0621 20:08:25.114065 6204 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true I0621 20:08:25.114150 6204 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true I0621 20:08:25.114155 6204 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true I0621 20:08:25.247783 6173 create.go:33] /tmp/kops.0x62OfvRz create --filename /tmp/kops-template565964388/manifest.yaml --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io ... skipping 56 lines ... I0621 20:09:03.474553 6173 up.go:243] /tmp/kops.0x62OfvRz validate cluster --name e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --count 10 --wait 15m0s I0621 20:09:03.493797 6240 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true I0621 20:09:03.493886 6240 featureflag.go:162] FeatureFlag "AlphaAllowGCE"=true I0621 20:09:03.493891 6240 featureflag.go:162] FeatureFlag "SpecOverrideFlag"=true Validating cluster e2e-143745cea3-c83fe.test-cncf-aws.k8s.io W0621 20:09:04.671235 6240 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:09:14.705667 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0621 20:09:24.746289 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:09:34.796430 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:09:44.838550 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:09:54.872841 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. 
Validation Failed W0621 20:10:04.910136 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:10:14.947866 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:10:24.983839 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:10:35.037489 6240 validate_cluster.go:232] (will retry): cluster not yet healthy W0621 20:10:45.073368 6240 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. 
Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:10:55.107936 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:11:05.154022 6240 validate_cluster.go:232] (will retry): cluster not yet healthy W0621 20:11:15.190288 6240 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:11:25.232575 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:11:35.267275 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. 
The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:11:45.309817 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:11:55.359299 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:12:05.397719 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:12:15.431975 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. 
Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:12:25.466227 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:12:35.501183 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:12:45.537383 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed W0621 20:12:55.575414 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a ... skipping 32 lines ... 
Pod kube-system/ebs-csi-node-w695c system-node-critical pod "ebs-csi-node-w695c" is pending Pod kube-system/metrics-server-655dc594b4-nftpb system-cluster-critical pod "metrics-server-655dc594b4-nftpb" is pending Pod kube-system/metrics-server-655dc594b4-xdfc9 system-cluster-critical pod "metrics-server-655dc594b4-xdfc9" is pending Pod kube-system/node-local-dns-8rjxs system-node-critical pod "node-local-dns-8rjxs" is pending Pod kube-system/node-local-dns-hlntv system-node-critical pod "node-local-dns-hlntv" is pending Validation Failed W0621 20:13:08.230694 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a ... skipping 29 lines ... Pod kube-system/ebs-csi-node-p7lfq system-node-critical pod "ebs-csi-node-p7lfq" is pending Pod kube-system/ebs-csi-node-w695c system-node-critical pod "ebs-csi-node-w695c" is pending Pod kube-system/metrics-server-655dc594b4-nftpb system-cluster-critical pod "metrics-server-655dc594b4-nftpb" is pending Pod kube-system/metrics-server-655dc594b4-xdfc9 system-cluster-critical pod "metrics-server-655dc594b4-xdfc9" is pending Pod kube-system/node-local-dns-hlntv system-node-critical pod "node-local-dns-hlntv" is pending Validation Failed W0621 20:13:20.158059 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a ... skipping 21 lines ... Pod kube-system/ebs-csi-node-h7zhr system-node-critical pod "ebs-csi-node-h7zhr" is pending Pod kube-system/ebs-csi-node-p7lfq system-node-critical pod "ebs-csi-node-p7lfq" is pending Pod kube-system/ebs-csi-node-w695c system-node-critical pod "ebs-csi-node-w695c" is pending Pod kube-system/metrics-server-655dc594b4-nftpb system-cluster-critical pod "metrics-server-655dc594b4-nftpb" is pending Pod kube-system/metrics-server-655dc594b4-xdfc9 system-cluster-critical pod "metrics-server-655dc594b4-xdfc9" is pending Validation Failed W0621 20:13:32.134378 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a ... skipping 20 lines ... Pod kube-system/ebs-csi-node-h7zhr system-node-critical pod "ebs-csi-node-h7zhr" is pending Pod kube-system/ebs-csi-node-p7lfq system-node-critical pod "ebs-csi-node-p7lfq" is pending Pod kube-system/ebs-csi-node-w695c system-node-critical pod "ebs-csi-node-w695c" is pending Pod kube-system/metrics-server-655dc594b4-nftpb system-cluster-critical pod "metrics-server-655dc594b4-nftpb" is not ready (metrics-server) Pod kube-system/metrics-server-655dc594b4-xdfc9 system-cluster-critical pod "metrics-server-655dc594b4-xdfc9" is not ready (metrics-server) Validation Failed W0621 20:13:43.933814 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a ... skipping 11 lines ... 
Pod kube-system/cert-manager-webhook-6d4d986bbd-zdx7b system-cluster-critical pod "cert-manager-webhook-6d4d986bbd-zdx7b" is not ready (cert-manager) Pod kube-system/ebs-csi-controller-774fbb7f45-2ll2c system-cluster-critical pod "ebs-csi-controller-774fbb7f45-2ll2c" is pending Pod kube-system/ebs-csi-node-c6sqq system-node-critical pod "ebs-csi-node-c6sqq" is pending Pod kube-system/metrics-server-655dc594b4-nftpb system-cluster-critical pod "metrics-server-655dc594b4-nftpb" is not ready (metrics-server) Pod kube-system/metrics-server-655dc594b4-xdfc9 system-cluster-critical pod "metrics-server-655dc594b4-xdfc9" is not ready (metrics-server) Validation Failed W0621 20:13:55.831596 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a ... skipping 7 lines ... VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-74bc54c5b-gbkxf system-cluster-critical pod "aws-load-balancer-controller-74bc54c5b-gbkxf" is pending Pod kube-system/ebs-csi-controller-774fbb7f45-2ll2c system-cluster-critical pod "ebs-csi-controller-774fbb7f45-2ll2c" is pending Validation Failed W0621 20:14:07.752443 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a ... skipping 6 lines ... ip-172-20-0-88.eu-west-2.compute.internal master True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-74bc54c5b-gbkxf system-cluster-critical pod "aws-load-balancer-controller-74bc54c5b-gbkxf" is pending Validation Failed W0621 20:14:19.577158 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a ... skipping 6 lines ... ip-172-20-0-88.eu-west-2.compute.internal master True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/aws-load-balancer-controller-74bc54c5b-gbkxf system-cluster-critical pod "aws-load-balancer-controller-74bc54c5b-gbkxf" is pending Validation Failed W0621 20:14:31.452358 6240 validate_cluster.go:232] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master c5.large 1 1 eu-west-2a nodes-eu-west-2a Node t3.medium 4 4 eu-west-2a ... skipping 535 lines ... evicting pod kube-system/aws-node-termination-handler-7888b4dfdc-dtlj8 I0621 20:18:17.347435 6356 request.go:665] Waited for 1.099196312s due to client-side throttling, not priority and fairness, request: GET:https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/kube-system/pods/ebs-csi-controller-774fbb7f45-2ll2c I0621 20:18:45.365800 6356 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. I0621 20:18:50.367471 6356 instancegroups.go:588] Stopping instance "i-0b74b7f776fccea82", node "ip-172-20-0-88.eu-west-2.compute.internal", in group "master-eu-west-2a.masters.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). I0621 20:18:50.664779 6356 instancegroups.go:434] waiting for 15s after terminating instance I0621 20:19:05.667517 6356 instancegroups.go:467] Validating the cluster. 
I0621 20:19:05.850068 6356 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.134.252.100:443: connect: connection refused. I0621 20:20:05.894039 6356 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.134.252.100:443: i/o timeout. I0621 20:21:05.932196 6356 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.134.252.100:443: i/o timeout. I0621 20:22:05.985499 6356 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.134.252.100:443: i/o timeout. I0621 20:23:06.036406 6356 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.134.252.100:443: i/o timeout. I0621 20:24:06.076294 6356 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.134.252.100:443: i/o timeout. I0621 20:25:06.123513 6356 instancegroups.go:513] Cluster did not validate, will retry in "30s": error listing nodes: Get "https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 18.134.252.100:443: i/o timeout. I0621 20:25:38.753436 6356 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "aws-load-balancer-controller-74bc54c5b-j7twl" is pending, system-cluster-critical pod "cert-manager-699d66b4b-5fmtw" is pending, system-cluster-critical pod "cert-manager-cainjector-6465ccdb69-rldtk" is pending, system-cluster-critical pod "cert-manager-webhook-6d4d986bbd-vnvk8" is pending, system-cluster-critical pod "cluster-autoscaler-6db4d794b9-dqm4n" is pending, system-cluster-critical pod "ebs-csi-controller-774fbb7f45-zftgr" is pending, system-node-critical pod "ebs-csi-node-dhg67" is pending. I0621 20:26:10.824177 6356 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-cluster-critical pod "ebs-csi-controller-774fbb7f45-zftgr" is pending. I0621 20:26:42.736849 6356 instancegroups.go:503] Cluster validated; revalidating in 10s to make sure it does not flap. I0621 20:26:54.731635 6356 instancegroups.go:500] Cluster validated. I0621 20:26:54.731682 6356 instancegroups.go:467] Validating the cluster. I0621 20:26:56.215888 6356 instancegroups.go:500] Cluster validated. ... skipping 35 lines ... evicting pod kube-system/hubble-relay-55846f56fb-gqbx4 WARNING: ignoring DaemonSet-managed Pods: kube-system/cilium-4z49x, kube-system/ebs-csi-node-58d2g, kube-system/node-local-dns-blkbn evicting pod kube-system/metrics-server-655dc594b4-nftpb evicting pod kube-system/coredns-7884856795-s7q24 evicting pod kube-system/coredns-autoscaler-57dd87df6c-85jsq I0621 20:34:20.807041 6356 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. error when evicting pods/"coredns-7884856795-s7q24" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
error when evicting pods/"metrics-server-655dc594b4-nftpb" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I0621 20:34:22.094028 6356 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. I0621 20:34:25.808024 6356 instancegroups.go:588] Stopping instance "i-0b6782026e6ccbe61", node "ip-172-20-0-227.eu-west-2.compute.internal", in group "nodes-eu-west-2a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). evicting pod kube-system/coredns-7884856795-s7q24 evicting pod kube-system/metrics-server-655dc594b4-nftpb error when evicting pods/"metrics-server-655dc594b4-nftpb" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I0621 20:34:26.112996 6356 instancegroups.go:434] waiting for 15s after terminating instance I0621 20:34:26.717085 6356 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. I0621 20:34:27.094795 6356 instancegroups.go:588] Stopping instance "i-08aeb450f35d69ea8", node "ip-172-20-0-29.eu-west-2.compute.internal", in group "nodes-eu-west-2a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). I0621 20:34:27.355848 6356 instancegroups.go:434] waiting for 15s after terminating instance evicting pod kube-system/metrics-server-655dc594b4-nftpb error when evicting pods/"metrics-server-655dc594b4-nftpb" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I0621 20:34:31.718194 6356 instancegroups.go:588] Stopping instance "i-017166f9933f1e4cb", node "ip-172-20-0-208.eu-west-2.compute.internal", in group "nodes-eu-west-2a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). I0621 20:34:31.964042 6356 instancegroups.go:434] waiting for 15s after terminating instance evicting pod kube-system/metrics-server-655dc594b4-nftpb error when evicting pods/"metrics-server-655dc594b4-nftpb" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I0621 20:34:41.114576 6356 instancegroups.go:467] Validating the cluster. evicting pod kube-system/metrics-server-655dc594b4-nftpb error when evicting pods/"metrics-server-655dc594b4-nftpb" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I0621 20:34:42.938905 6356 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-node-critical pod "cilium-5xnjw" is pending, system-node-critical pod "cilium-vbntv" is pending, system-node-critical pod "ebs-csi-node-7jfds" is pending, system-node-critical pod "ebs-csi-node-gn8lf" is pending, system-node-critical pod "ebs-csi-node-ncrwh" is pending, system-node-critical pod "kube-proxy-ip-172-20-0-208.eu-west-2.compute.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-227.eu-west-2.compute.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-29.eu-west-2.compute.internal" is not ready (kube-proxy), system-cluster-critical pod "metrics-server-655dc594b4-b7cfd" is not ready (metrics-server), system-node-critical pod "node-local-dns-5t8tf" is pending, system-node-critical pod "node-local-dns-fzwk9" is pending. evicting pod kube-system/metrics-server-655dc594b4-nftpb error when evicting pods/"metrics-server-655dc594b4-nftpb" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
evicting pod kube-system/metrics-server-655dc594b4-nftpb error when evicting pods/"metrics-server-655dc594b4-nftpb" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. evicting pod kube-system/metrics-server-655dc594b4-nftpb error when evicting pods/"metrics-server-655dc594b4-nftpb" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. evicting pod kube-system/metrics-server-655dc594b4-nftpb I0621 20:35:07.860795 6356 instancegroups.go:653] Waiting for 5s for pods to stabilize after draining. I0621 20:35:12.861136 6356 instancegroups.go:588] Stopping instance "i-0f585075fcaf6c5ef", node "ip-172-20-0-87.eu-west-2.compute.internal", in group "nodes-eu-west-2a.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io" (this may take a while). I0621 20:35:13.130176 6356 instancegroups.go:434] waiting for 15s after terminating instance I0621 20:35:14.831812 6356 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-node-critical pod "cilium-5xnjw" is pending, system-node-critical pod "cilium-7xdrj" is pending, system-node-critical pod "cilium-ftr94" is pending, system-node-critical pod "cilium-vbntv" is pending, system-node-critical pod "ebs-csi-node-7jfds" is pending, system-node-critical pod "ebs-csi-node-gn8lf" is pending, system-node-critical pod "ebs-csi-node-ncrwh" is pending, system-node-critical pod "ebs-csi-node-nmq6x" is pending, system-node-critical pod "kube-proxy-ip-172-20-0-208.eu-west-2.compute.internal" is not ready (kube-proxy), system-cluster-critical pod "metrics-server-655dc594b4-txfcm" is not ready (metrics-server), system-node-critical pod "node-local-dns-5t8tf" is pending, system-node-critical pod "node-local-dns-fzwk9" is pending, system-node-critical pod "node-local-dns-l4hg9" is pending. I0621 20:35:46.763803 6356 instancegroups.go:523] Cluster did not pass validation, will retry in "30s": system-node-critical pod "cilium-5xnjw" is pending, system-node-critical pod "cilium-7xdrj" is pending, system-node-critical pod "cilium-vbntv" is pending, system-node-critical pod "cilium-wp6tn" is pending, system-node-critical pod "ebs-csi-node-7jfds" is pending, system-node-critical pod "ebs-csi-node-gn8lf" is pending, system-node-critical pod "ebs-csi-node-ncrwh" is pending, system-node-critical pod "ebs-csi-node-rvcfm" is pending, system-node-critical pod "kube-proxy-ip-172-20-0-208.eu-west-2.compute.internal" is not ready (kube-proxy), system-node-critical pod "kube-proxy-ip-172-20-0-87.eu-west-2.compute.internal" is not ready (kube-proxy), system-node-critical pod "node-local-dns-5t8tf" is pending, system-node-critical pod "node-local-dns-fzwk9" is pending, system-node-critical pod "node-local-dns-l4hg9" is pending, system-node-critical pod "node-local-dns-vvhff" is pending. ... skipping 539 lines ... 
[sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: blockfs] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 488 lines ... [AfterEach] [sig-api-machinery] client-go should negotiate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:38:17.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":1,"skipped":18,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:17.421: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 60 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:38:19.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-4442" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:20.058: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping ... skipping 37 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:38:20.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replicaset-9292" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":2,"skipped":27,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":4,"failed":0} [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:38:19.627: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename disruption [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 19 lines ... [32m• [SLOW TEST:5.988 seconds][0m [sig-apps] DisruptionController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should update/patch PodDisruptionBudget status [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:25.617: INFO: Only supported for providers [openstack] (not aws) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 40 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41[0m when running a container with a new image [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266[0m should not be able to pull image from invalid registry [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":11,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... Jun 21 20:38:17.375: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap configmap-4463/configmap-test-6a843e75-d8ec-478f-ba4d-fb313ad6815f [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:38:17.770: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9" in namespace "configmap-4463" to be "Succeeded or Failed" Jun 21 20:38:17.868: INFO: Pod "pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 98.23704ms Jun 21 20:38:19.978: INFO: Pod "pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207869459s Jun 21 20:38:22.077: INFO: Pod "pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306575289s Jun 21 20:38:24.174: INFO: Pod "pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404358572s Jun 21 20:38:26.285: INFO: Pod "pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515184148s Jun 21 20:38:28.389: INFO: Pod "pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.618819134s [1mSTEP[0m: Saw pod success Jun 21 20:38:28.389: INFO: Pod "pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9" satisfied condition "Succeeded or Failed" Jun 21 20:38:28.487: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9 container env-test: <nil> [1mSTEP[0m: delete the pod Jun 21 20:38:28.693: INFO: Waiting for pod pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9 to disappear Jun 21 20:38:28.790: INFO: Pod pod-configmaps-f1a1b924-7a08-4b74-95b1-c2242313b9a9 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:12.125 seconds][0m [sig-node] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be consumable via the environment [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... W0621 20:38:17.723231 7210 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 21 20:38:17.723: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0644 on node default medium Jun 21 20:38:18.061: INFO: Waiting up to 5m0s for pod "pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a" in namespace "emptydir-3967" to be "Succeeded or Failed" Jun 21 20:38:18.185: INFO: Pod "pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 124.015274ms Jun 21 20:38:20.283: INFO: Pod "pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221860614s Jun 21 20:38:22.380: INFO: Pod "pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319813125s Jun 21 20:38:24.479: INFO: Pod "pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418401435s Jun 21 20:38:26.580: INFO: Pod "pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51958516s Jun 21 20:38:28.678: INFO: Pod "pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.6177178s [1mSTEP[0m: Saw pod success Jun 21 20:38:28.678: INFO: Pod "pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a" satisfied condition "Succeeded or Failed" Jun 21 20:38:28.776: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:38:28.979: INFO: Waiting for pod pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a to disappear Jun 21 20:38:29.082: INFO: Pod pod-f62c9096-0f59-41e7-9858-cbe437ac8a4a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:12.397 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:29.379: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 94 lines ... 
[32m• [SLOW TEST:13.408 seconds][0m [sig-apps] ReplicaSet [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should adopt matching pods on creation and release no longer matching pods [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:30.435: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 33 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:38:30.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "disruption-2112" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:30.794: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 85 lines ... [32m• [SLOW TEST:12.536 seconds][0m [sig-api-machinery] Garbage collector [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should delete pods created by rc when not orphaning [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:32.606: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 72 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl copy [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1368[0m should copy a file from a running Pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1385[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":1,"skipped":2,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:32.726: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping ... skipping 143 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":13,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:33.550: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping ... skipping 68 lines ... [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-224edf11-384d-4ac1-a8df-8a81f04ee375 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:38:26.606: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533" in namespace "projected-3408" to be "Succeeded or Failed" Jun 21 20:38:26.707: INFO: Pod "pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533": Phase="Pending", Reason="", readiness=false. Elapsed: 100.948358ms Jun 21 20:38:28.805: INFO: Pod "pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198083291s Jun 21 20:38:30.904: INFO: Pod "pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297780078s Jun 21 20:38:33.002: INFO: Pod "pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.395654767s Jun 21 20:38:35.157: INFO: Pod "pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.550568476s [1mSTEP[0m: Saw pod success Jun 21 20:38:35.157: INFO: Pod "pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533" satisfied condition "Succeeded or Failed" Jun 21 20:38:35.262: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:38:35.522: INFO: Waiting for pod pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533 to disappear Jun 21 20:38:35.620: INFO: Pod pod-projected-configmaps-edd55642-151b-431d-b265-ed234eb24533 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:9.932 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":12,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath ... skipping 30 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 17 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:38:36.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-4188" for this suite. 
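The projected-configmap case that finishes above mounts a ConfigMap through a projected volume with an explicit defaultMode while the pod runs as a non-root UID and fsGroup. A rough Go sketch of that shape is below; the image, names, UID/GID and mode are illustrative assumptions, not the test's actual values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			// Run the whole pod as a non-root user and supplemental group.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: ptr(int64(1000)),
				FSGroup:   ptr(int64(2000)),
			},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox:1.35", // placeholder image
				Command: []string{"sh", "-c", "ls -l /etc/config"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cfg",
					MountPath: "/etc/config",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// File mode applied to projected keys unless overridden per item.
						DefaultMode: ptr(int32(0440)),
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "example-config"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}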
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":2,"skipped":15,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes ... skipping 114 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m when create a pod with lifecycle hook [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44[0m should execute poststart http hook properly [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... [32m• [SLOW TEST:11.012 seconds][0m [sig-apps] StatefulSet [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m MinReadySeconds should be honored when enabled [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:1150[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet MinReadySeconds should be honored when enabled","total":-1,"completed":3,"skipped":15,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:41.819: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 83 lines ... Jun 21 20:38:28.148: INFO: PersistentVolumeClaim pvc-9b442 found but phase is Pending instead of Bound. Jun 21 20:38:30.244: INFO: PersistentVolumeClaim pvc-9b442 found and phase=Bound (2.231427508s) Jun 21 20:38:30.244: INFO: Waiting up to 3m0s for PersistentVolume local-rrz7t to have phase Bound Jun 21 20:38:30.341: INFO: PersistentVolume local-rrz7t found and phase=Bound (96.425436ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-4wbt [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:38:30.636: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4wbt" in namespace "provisioning-5778" to be "Succeeded or Failed" Jun 21 20:38:30.740: INFO: Pod "pod-subpath-test-preprovisionedpv-4wbt": Phase="Pending", Reason="", readiness=false. Elapsed: 103.837259ms Jun 21 20:38:32.838: INFO: Pod "pod-subpath-test-preprovisionedpv-4wbt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.202082262s Jun 21 20:38:35.004: INFO: Pod "pod-subpath-test-preprovisionedpv-4wbt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.367902717s [1mSTEP[0m: Saw pod success Jun 21 20:38:35.004: INFO: Pod "pod-subpath-test-preprovisionedpv-4wbt" satisfied condition "Succeeded or Failed" Jun 21 20:38:35.158: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-4wbt container test-container-subpath-preprovisionedpv-4wbt: <nil> [1mSTEP[0m: delete the pod Jun 21 20:38:35.381: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4wbt to disappear Jun 21 20:38:35.493: INFO: Pod pod-subpath-test-preprovisionedpv-4wbt no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-4wbt Jun 21 20:38:35.493: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4wbt" in namespace "provisioning-5778" [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-4wbt [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:38:35.699: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4wbt" in namespace "provisioning-5778" to be "Succeeded or Failed" Jun 21 20:38:35.798: INFO: Pod "pod-subpath-test-preprovisionedpv-4wbt": Phase="Pending", Reason="", readiness=false. Elapsed: 99.375165ms Jun 21 20:38:37.898: INFO: Pod "pod-subpath-test-preprovisionedpv-4wbt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198836615s Jun 21 20:38:40.009: INFO: Pod "pod-subpath-test-preprovisionedpv-4wbt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.309805755s [1mSTEP[0m: Saw pod success Jun 21 20:38:40.009: INFO: Pod "pod-subpath-test-preprovisionedpv-4wbt" satisfied condition "Succeeded or Failed" Jun 21 20:38:40.108: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-4wbt container test-container-subpath-preprovisionedpv-4wbt: <nil> [1mSTEP[0m: delete the pod Jun 21 20:38:40.370: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4wbt to disappear Jun 21 20:38:40.489: INFO: Pod pod-subpath-test-preprovisionedpv-4wbt no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-4wbt Jun 21 20:38:40.489: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4wbt" in namespace "provisioning-5778" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":12,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:41.916: INFO: Only supported for providers [gce gke] (not aws) ... skipping 65 lines ... 
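The subPath case that passes above ("existing directories when readOnly specified in the volumeSource") boils down to a read-only PVC volume source plus a subPath on the container mount. A minimal sketch, with claim, directory and image names invented for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:    "subpath-demo",
			Image:   "busybox:1.35",
			Command: []string{"sh", "-c", "ls /data"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "pvc-vol",
				MountPath: "/data",
				// Only this sub-directory of the volume is visible in the container.
				SubPath: "provisioning-demo",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "pvc-vol",
			VolumeSource: corev1.VolumeSource{
				PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
					ClaimName: "example-claim",
					// readOnly set on the volume source itself, as in the test name above.
					ReadOnly: true,
				},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}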
[32m• [SLOW TEST:31.779 seconds][0m [sig-node] PreStop [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m graceful pod terminated should wait until preStop hook completes the process [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":1,"skipped":2,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:48.746: INFO: Only supported for providers [azure] (not aws) ... skipping 81 lines ... [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:38:38.828: INFO: >>> kubeConfig: /root/.kube/config ... skipping 2 lines ... [It] should support existing single file [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219 Jun 21 20:38:39.334: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics Jun 21 20:38:39.334: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-z9qv [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:38:39.440: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-z9qv" in namespace "provisioning-311" to be "Succeeded or Failed" Jun 21 20:38:39.545: INFO: Pod "pod-subpath-test-inlinevolume-z9qv": Phase="Pending", Reason="", readiness=false. Elapsed: 105.044379ms Jun 21 20:38:41.645: INFO: Pod "pod-subpath-test-inlinevolume-z9qv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204805854s Jun 21 20:38:43.756: INFO: Pod "pod-subpath-test-inlinevolume-z9qv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316070927s Jun 21 20:38:45.856: INFO: Pod "pod-subpath-test-inlinevolume-z9qv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416574711s Jun 21 20:38:47.954: INFO: Pod "pod-subpath-test-inlinevolume-z9qv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.514730987s [1mSTEP[0m: Saw pod success Jun 21 20:38:47.955: INFO: Pod "pod-subpath-test-inlinevolume-z9qv" satisfied condition "Succeeded or Failed" Jun 21 20:38:48.055: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-z9qv container test-container-subpath-inlinevolume-z9qv: <nil> [1mSTEP[0m: delete the pod Jun 21 20:38:48.274: INFO: Waiting for pod pod-subpath-test-inlinevolume-z9qv to disappear Jun 21 20:38:48.372: INFO: Pod pod-subpath-test-inlinevolume-z9qv no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-z9qv Jun 21 20:38:48.372: INFO: Deleting pod "pod-subpath-test-inlinevolume-z9qv" in namespace "provisioning-311" ... skipping 12 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":5,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 8 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:38:49.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-5003" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeFeature:RuntimeHandler]","total":-1,"completed":2,"skipped":17,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:49.875: INFO: Only supported for providers [vsphere] (not aws) ... skipping 56 lines ... Jun 21 20:38:32.751: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194 Jun 21 20:38:33.237: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 21 20:38:33.439: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6091" in namespace "provisioning-6091" to be "Succeeded or Failed" Jun 21 20:38:33.535: INFO: Pod "hostpath-symlink-prep-provisioning-6091": Phase="Pending", Reason="", readiness=false. 
Elapsed: 96.144183ms Jun 21 20:38:35.635: INFO: Pod "hostpath-symlink-prep-provisioning-6091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196756064s Jun 21 20:38:37.733: INFO: Pod "hostpath-symlink-prep-provisioning-6091": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294425723s Jun 21 20:38:39.833: INFO: Pod "hostpath-symlink-prep-provisioning-6091": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.394663148s [1mSTEP[0m: Saw pod success Jun 21 20:38:39.833: INFO: Pod "hostpath-symlink-prep-provisioning-6091" satisfied condition "Succeeded or Failed" Jun 21 20:38:39.833: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6091" in namespace "provisioning-6091" Jun 21 20:38:39.944: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6091" to be fully deleted Jun 21 20:38:40.042: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-6ld9 [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:38:40.152: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-6ld9" in namespace "provisioning-6091" to be "Succeeded or Failed" Jun 21 20:38:40.265: INFO: Pod "pod-subpath-test-inlinevolume-6ld9": Phase="Pending", Reason="", readiness=false. Elapsed: 113.133987ms Jun 21 20:38:42.362: INFO: Pod "pod-subpath-test-inlinevolume-6ld9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209713342s Jun 21 20:38:44.459: INFO: Pod "pod-subpath-test-inlinevolume-6ld9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307355824s Jun 21 20:38:46.556: INFO: Pod "pod-subpath-test-inlinevolume-6ld9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.404110422s [1mSTEP[0m: Saw pod success Jun 21 20:38:46.556: INFO: Pod "pod-subpath-test-inlinevolume-6ld9" satisfied condition "Succeeded or Failed" Jun 21 20:38:46.653: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-6ld9 container test-container-volume-inlinevolume-6ld9: <nil> [1mSTEP[0m: delete the pod Jun 21 20:38:47.182: INFO: Waiting for pod pod-subpath-test-inlinevolume-6ld9 to disappear Jun 21 20:38:47.279: INFO: Pod pod-subpath-test-inlinevolume-6ld9 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-6ld9 Jun 21 20:38:47.279: INFO: Deleting pod "pod-subpath-test-inlinevolume-6ld9" in namespace "provisioning-6091" [1mSTEP[0m: Deleting pod Jun 21 20:38:47.381: INFO: Deleting pod "pod-subpath-test-inlinevolume-6ld9" in namespace "provisioning-6091" Jun 21 20:38:47.589: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6091" in namespace "provisioning-6091" to be "Succeeded or Failed" Jun 21 20:38:47.686: INFO: Pod "hostpath-symlink-prep-provisioning-6091": Phase="Pending", Reason="", readiness=false. Elapsed: 96.318697ms Jun 21 20:38:49.784: INFO: Pod "hostpath-symlink-prep-provisioning-6091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195044384s Jun 21 20:38:51.884: INFO: Pod "hostpath-symlink-prep-provisioning-6091": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.294832412s [1mSTEP[0m: Saw pod success Jun 21 20:38:51.884: INFO: Pod "hostpath-symlink-prep-provisioning-6091" satisfied condition "Succeeded or Failed" Jun 21 20:38:51.884: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6091" in namespace "provisioning-6091" Jun 21 20:38:52.004: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6091" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:38:52.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "provisioning-6091" for this suite. ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":21,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:52.318: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 61 lines ... [32m• [SLOW TEST:22.556 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to change the type from ExternalName to NodePort [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 19 lines ... Jun 21 20:38:27.049: INFO: PersistentVolumeClaim pvc-trsvt found but phase is Pending instead of Bound. 
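The Services conformance case a little earlier in this block flips a Service from type ExternalName to NodePort. A hedged sketch of the two shapes involved; the service name, selector, ports and external host are placeholders, not values from this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "example-svc"},
		Spec: corev1.ServiceSpec{
			// Initial shape: a pure DNS alias, no selector or ports required.
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "example.example.com",
		},
	}

	// Changing the type: externalName is cleared and a real selector plus ports
	// are added before the Service can be served as a NodePort.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"app": "example"}
	svc.Spec.Ports = []corev1.ServicePort{{
		Port:       80,
		TargetPort: intstr.FromInt(8080),
	}}

	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}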
Jun 21 20:38:29.147: INFO: PersistentVolumeClaim pvc-trsvt found and phase=Bound (2.200134999s) Jun 21 20:38:29.147: INFO: Waiting up to 3m0s for PersistentVolume local-p9czf to have phase Bound Jun 21 20:38:29.252: INFO: PersistentVolume local-p9czf found and phase=Bound (105.000888ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-glvj [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:38:29.547: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-glvj" in namespace "provisioning-3909" to be "Succeeded or Failed" Jun 21 20:38:29.651: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 104.365983ms Jun 21 20:38:31.750: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203573668s Jun 21 20:38:33.854: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307400611s Jun 21 20:38:35.966: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.419473037s Jun 21 20:38:38.079: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531935515s Jun 21 20:38:40.178: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.631498365s Jun 21 20:38:42.284: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.73696847s [1mSTEP[0m: Saw pod success Jun 21 20:38:42.284: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj" satisfied condition "Succeeded or Failed" Jun 21 20:38:42.382: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-glvj container test-container-subpath-preprovisionedpv-glvj: <nil> [1mSTEP[0m: delete the pod Jun 21 20:38:42.603: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-glvj to disappear Jun 21 20:38:42.701: INFO: Pod pod-subpath-test-preprovisionedpv-glvj no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-glvj Jun 21 20:38:42.701: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-glvj" in namespace "provisioning-3909" [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-glvj [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:38:42.898: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-glvj" in namespace "provisioning-3909" to be "Succeeded or Failed" Jun 21 20:38:43.001: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 103.078086ms Jun 21 20:38:45.213: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314435236s Jun 21 20:38:47.312: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41346188s Jun 21 20:38:49.412: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.513185502s Jun 21 20:38:51.573: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.674478232s [1mSTEP[0m: Saw pod success Jun 21 20:38:51.573: INFO: Pod "pod-subpath-test-preprovisionedpv-glvj" satisfied condition "Succeeded or Failed" Jun 21 20:38:51.695: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-glvj container test-container-subpath-preprovisionedpv-glvj: <nil> [1mSTEP[0m: delete the pod Jun 21 20:38:51.929: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-glvj to disappear Jun 21 20:38:52.027: INFO: Pod pod-subpath-test-preprovisionedpv-glvj no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-glvj Jun 21 20:38:52.027: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-glvj" in namespace "provisioning-3909" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":18,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:53.754: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 5 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: block] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 172 lines ... 
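The topology cases being skipped above revolve around a StorageClass whose allowedTopologies restrict where volumes may be provisioned; a pod pinned to a conflicting zone then fails to schedule. A rough sketch of such a StorageClass, with the provisioner and zone values assumed for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	waitForConsumer := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "zoned-sc"},
		Provisioner: "ebs.csi.aws.com", // placeholder provisioner
		// Delay binding until a pod is scheduled, then only provision in the listed zone.
		VolumeBindingMode: &waitForConsumer,
		AllowedTopologies: []corev1.TopologySelectorTerm{{
			MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone",
				Values: []string{"eu-west-2a"},
			}},
		}},
	}
	out, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(out))
}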
[BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:38:35.829: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename job [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a job [1mSTEP[0m: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:38:54.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "job-4409" for this suite. [32m• [SLOW TEST:18.898 seconds][0m [sig-apps] Job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:38:54.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "podtemplate-7421" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:55.054: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 139 lines ... 
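The Job conformance case above ("tasks sometimes fail and are locally restarted") depends on restartPolicy OnFailure, so the kubelet restarts a failing container in place rather than the Job controller creating a replacement pod. A minimal sketch under assumed names, image and the common emptyDir marker trick (first attempt fails, the restart succeeds):

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "flaky-job"},
		Spec: batchv1.JobSpec{
			Completions: ptr(int32(4)),
			Parallelism: ptr(int32(2)),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure restarts the container inside the same pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "worker",
						Image: "busybox:1.35",
						// Fail the first run; the emptyDir survives the container restart,
						// so the second run exits 0 and counts as a completion.
						Command:      []string{"sh", "-c", "if [ ! -e /data/ok ]; then touch /data/ok; exit 1; fi"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}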
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":24,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:38:58.281: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 39 lines ... [32m• [SLOW TEST:43.337 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m works for multiple CRDs of different groups [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode ... skipping 68 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":34,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:01.326: INFO: Only supported for providers [vsphere] (not aws) ... skipping 5 lines ... 
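Several of the volumeMode cases in this stretch hinge on whether a claim asks for a filesystem or a raw block device; block claims are consumed through volumeDevices, filesystem claims through volumeMounts. A hedged sketch of the PVC side, with the claim name, size and access mode assumed:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	blockMode := corev1.PersistentVolumeBlock // or corev1.PersistentVolumeFilesystem
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "block-claim"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			// Selects the raw-block flavour of the claim.
			VolumeMode: &blockMode,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pvc, "", "  ")
	fmt.Println(string(out))
}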
[sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: vsphere] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for providers [vsphere] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438 [90m------------------------------[0m ... skipping 107 lines ... [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 21 20:38:53.716: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49" in namespace "security-context-test-8524" to be "Succeeded or Failed" Jun 21 20:38:53.832: INFO: Pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49": Phase="Pending", Reason="", readiness=false. Elapsed: 115.912749ms Jun 21 20:38:55.934: INFO: Pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217581438s Jun 21 20:38:58.035: INFO: Pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319154485s Jun 21 20:39:00.133: INFO: Pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417090681s Jun 21 20:39:02.231: INFO: Pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514991603s Jun 21 20:39:04.328: INFO: Pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49": Phase="Pending", Reason="", readiness=false. Elapsed: 10.612312978s Jun 21 20:39:06.436: INFO: Pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49": Phase="Pending", Reason="", readiness=false. Elapsed: 12.719654262s Jun 21 20:39:08.533: INFO: Pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.817264823s Jun 21 20:39:08.533: INFO: Pod "busybox-user-65534-c22d0d6b-b230-45b1-ab79-d8ef33fb2b49" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:39:08.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-8524" for this suite. ... skipping 2 lines ... 
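The Security Context case that completes above simply runs a busybox container with runAsUser 65534 and verifies the effective UID. A minimal sketch of that shape, with pod, container and image names invented for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.35",
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					// The kubelet starts the container process as this UID.
					RunAsUser: ptr(int64(65534)),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}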
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m When creating a container with runAsUser [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50[0m should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:08.748: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 53 lines ... [1mSTEP[0m: Destroying namespace "apply-4799" for this suite. [AfterEach] [sig-api-machinery] ServerSideApply /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":4,"skipped":30,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:38:52.326: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-map-1c1da7b8-80e6-4e2d-9c0e-f3ad05f12c35 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:38:53.184: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d" in namespace "projected-7228" to be "Succeeded or Failed" Jun 21 20:38:53.322: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d": Phase="Pending", Reason="", readiness=false. Elapsed: 137.935578ms Jun 21 20:38:55.419: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23476197s Jun 21 20:38:57.529: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345119891s Jun 21 20:38:59.628: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443747969s Jun 21 20:39:01.726: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.542102114s Jun 21 20:39:03.834: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649148215s Jun 21 20:39:05.939: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.754863675s Jun 21 20:39:08.037: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.852278384s Jun 21 20:39:10.134: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.949753166s [1mSTEP[0m: Saw pod success Jun 21 20:39:10.134: INFO: Pod "pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d" satisfied condition "Succeeded or Failed" Jun 21 20:39:10.232: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:10.440: INFO: Waiting for pod pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d to disappear Jun 21 20:39:10.536: INFO: Pod pod-projected-configmaps-808b0607-0c86-478b-9d16-886870bb279d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:18.408 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:10.736: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 75 lines ... 
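The "mappings and Item mode set" case that passes above maps a single ConfigMap key to a chosen path with an explicit per-file mode instead of projecting every key under its own name. For brevity the sketch below shows the items mapping on a plain configMap volume rather than the projected form the test uses; key, path and mode are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	vol := corev1.Volume{
		Name: "cfg",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "example-config"},
				// Only the listed key is materialised, at the given relative path,
				// with an explicit per-item file mode.
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "path/to/data-1",
					Mode: ptr(int32(0400)),
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}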
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl client-side validation [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1005[0m should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1050[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":4,"skipped":16,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:11.310: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping ... skipping 76 lines ... [1mSTEP[0m: Building a namespace api object, basename secrets [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating secret with name secret-test-map-c8e9f61f-4ee6-4647-8427-ea912b5fc675 [1mSTEP[0m: Creating a pod to test consume secrets Jun 21 20:38:58.981: INFO: Waiting up to 5m0s for pod "pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102" in namespace "secrets-656" to be "Succeeded or Failed" Jun 21 20:38:59.079: INFO: Pod "pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102": Phase="Pending", Reason="", readiness=false. Elapsed: 97.827323ms Jun 21 20:39:01.177: INFO: Pod "pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19562764s Jun 21 20:39:03.274: INFO: Pod "pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29347925s Jun 21 20:39:05.374: INFO: Pod "pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393340869s Jun 21 20:39:07.473: INFO: Pod "pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492483067s Jun 21 20:39:09.574: INFO: Pod "pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102": Phase="Pending", Reason="", readiness=false. Elapsed: 10.593465905s Jun 21 20:39:11.679: INFO: Pod "pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.698523283s [1mSTEP[0m: Saw pod success Jun 21 20:39:11.680: INFO: Pod "pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102" satisfied condition "Succeeded or Failed" Jun 21 20:39:11.777: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102 container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:11.985: INFO: Waiting for pod pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102 to disappear Jun 21 20:39:12.083: INFO: Pod pod-secrets-d1df3fa2-9e1a-4005-abf5-ec3e7da9b102 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:14.028 seconds][0m [sig-storage] Secrets [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 52 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:39:12.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "endpointslice-1027" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0} [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:38:54.120: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename container-runtime [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 53 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:39:14.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "disruption-3820" for this suite. 
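The EndpointSlice conformance case above exercises basic create/get/list/patch/delete against the discovery.k8s.io API. A rough sketch of a hand-built slice; the addresses, port and owning service name are assumptions made for illustration.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	slice := discoveryv1.EndpointSlice{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example-svc-abc12",
			// Associates the slice with its Service.
			Labels: map[string]string{discoveryv1.LabelServiceName: "example-svc"},
		},
		AddressType: discoveryv1.AddressTypeIPv4,
		Endpoints: []discoveryv1.Endpoint{{
			Addresses:  []string{"10.0.0.10"},
			Conditions: discoveryv1.EndpointConditions{Ready: ptr(true)},
		}},
		Ports: []discoveryv1.EndpointPort{{
			Name:     ptr("http"),
			Port:     ptr(int32(8080)),
			Protocol: ptr(corev1.ProtocolTCP),
		}},
	}
	out, _ := json.MarshalIndent(slice, "", "  ")
	fmt.Println(string(out))
}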
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes ... skipping 18 lines ... Jun 21 20:38:56.825: INFO: PersistentVolumeClaim pvc-lfdvg found but phase is Pending instead of Bound. Jun 21 20:38:58.923: INFO: PersistentVolumeClaim pvc-lfdvg found and phase=Bound (4.307760761s) Jun 21 20:38:58.923: INFO: Waiting up to 3m0s for PersistentVolume local-jzmxl to have phase Bound Jun 21 20:38:59.024: INFO: PersistentVolume local-jzmxl found and phase=Bound (100.530036ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-brz5 [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 20:38:59.358: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-brz5" in namespace "volume-8304" to be "Succeeded or Failed" Jun 21 20:38:59.467: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 108.7034ms Jun 21 20:39:01.565: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206805116s Jun 21 20:39:03.665: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30718146s Jun 21 20:39:05.818: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46034711s Jun 21 20:39:07.920: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562406963s Jun 21 20:39:10.025: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.667292879s Jun 21 20:39:12.124: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.766348932s Jun 21 20:39:14.224: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.866435683s Jun 21 20:39:16.323: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.96484048s Jun 21 20:39:18.420: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.062450662s Jun 21 20:39:20.519: INFO: Pod "exec-volume-test-preprovisionedpv-brz5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.161191184s [1mSTEP[0m: Saw pod success Jun 21 20:39:20.519: INFO: Pod "exec-volume-test-preprovisionedpv-brz5" satisfied condition "Succeeded or Failed" Jun 21 20:39:20.616: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-brz5 container exec-container-preprovisionedpv-brz5: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:20.820: INFO: Waiting for pod exec-volume-test-preprovisionedpv-brz5 to disappear Jun 21 20:39:20.918: INFO: Pod exec-volume-test-preprovisionedpv-brz5 no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-brz5 Jun 21 20:39:20.918: INFO: Deleting pod "exec-volume-test-preprovisionedpv-brz5" in namespace "volume-8304" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":18,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:39:10.748: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 21 20:39:11.420: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f884ecea-2a7c-48fa-a470-703b045f0a4b" in namespace "security-context-test-9056" to be "Succeeded or Failed" Jun 21 20:39:11.532: INFO: Pod "alpine-nnp-false-f884ecea-2a7c-48fa-a470-703b045f0a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 112.612582ms Jun 21 20:39:13.645: INFO: Pod "alpine-nnp-false-f884ecea-2a7c-48fa-a470-703b045f0a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224956919s Jun 21 20:39:15.741: INFO: Pod "alpine-nnp-false-f884ecea-2a7c-48fa-a470-703b045f0a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32186259s Jun 21 20:39:17.838: INFO: Pod "alpine-nnp-false-f884ecea-2a7c-48fa-a470-703b045f0a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41882948s Jun 21 20:39:19.940: INFO: Pod "alpine-nnp-false-f884ecea-2a7c-48fa-a470-703b045f0a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519894369s Jun 21 20:39:22.038: INFO: Pod "alpine-nnp-false-f884ecea-2a7c-48fa-a470-703b045f0a4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.618416272s Jun 21 20:39:22.038: INFO: Pod "alpine-nnp-false-f884ecea-2a7c-48fa-a470-703b045f0a4b" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:39:22.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-9056" for this suite. ... skipping 2 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m when creating containers with AllowPrivilegeEscalation [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296[0m should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:22.343: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 21 lines ... [1mSTEP[0m: Building a namespace api object, basename secrets [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating secret with name secret-test-7bf9cf14-c025-4d30-be0b-56a46570ecca [1mSTEP[0m: Creating a pod to test consume secrets Jun 21 20:39:13.337: INFO: Waiting up to 5m0s for pod "pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113" in namespace "secrets-3497" to be "Succeeded or Failed" Jun 21 20:39:13.467: INFO: Pod "pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113": Phase="Pending", Reason="", readiness=false. Elapsed: 130.15346ms Jun 21 20:39:15.565: INFO: Pod "pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228148944s Jun 21 20:39:17.672: INFO: Pod "pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335794739s Jun 21 20:39:19.770: INFO: Pod "pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433683889s Jun 21 20:39:21.868: INFO: Pod "pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.531649916s [1mSTEP[0m: Saw pod success Jun 21 20:39:21.868: INFO: Pod "pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113" satisfied condition "Succeeded or Failed" Jun 21 20:39:21.966: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113 container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:22.170: INFO: Waiting for pod pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113 to disappear Jun 21 20:39:22.267: INFO: Pod pod-secrets-0bfc7ae8-d733-4ddf-b2ce-cda50a7e1113 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:10.141 seconds][0m [sig-storage] Secrets [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0} [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:39:13.465: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir-wrapper [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 16 lines ... [32m• [SLOW TEST:9.717 seconds][0m [sig-storage] EmptyDir wrapper volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m should not conflict [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":6,"skipped":25,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:23.190: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 77 lines ... 
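
The Secrets defaultMode case above can be reproduced out of band with a minimal manifest along the lines sketched below; the namespace, secret, pod names and the 0400 mode are illustrative assumptions, not values taken from this run.

# Sketch only: the secret-volume defaultMode pattern exercised above.
kubectl create namespace secret-mode-demo
kubectl -n secret-mode-demo create secret generic demo-secret --from-literal=data-1=value-1
kubectl -n secret-mode-demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400   # permission bits applied to the projected secret files
EOF
kubectl -n secret-mode-demo logs pod/secret-mode-demo   # inspect once the pod has completed
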
Jun 21 20:39:01.843: INFO: Creating resource for dynamic PV Jun 21 20:39:01.843: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} [1mSTEP[0m: creating a StorageClass volume-expand-759j275z [1mSTEP[0m: creating a claim [1mSTEP[0m: Expanding non-expandable pvc Jun 21 20:39:02.137: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI} Jun 21 20:39:02.351: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:04.546: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:06.559: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:08.546: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:10.553: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:12.610: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... 
// 3 identical fields } Jun 21 20:39:14.550: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:16.546: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:18.546: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:20.549: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:22.545: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:24.551: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:26.549: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... 
// 3 identical fields } Jun 21 20:39:28.545: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:30.547: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:32.550: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-759j275z", ... // 3 identical fields } Jun 21 20:39:32.745: INFO: Error updating pvc awslkrwt: PersistentVolumeClaim "awslkrwt" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 24 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (block volmode)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not allow expansion of pvcs without AllowVolumeExpansion property [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":4,"skipped":45,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:33.242: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 258 lines ... 
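
The repeated "spec is immutable after creation except resources.requests for bound claims" rejections above are the expected response to resizing a PVC whose StorageClass does not allow expansion. A minimal sketch of the knob involved follows; the class name, claim name and sizes are invented for illustration, not taken from this run.

# Sketch only: PVC expansion is accepted only when the StorageClass sets
# allowVolumeExpansion: true; otherwise the apiserver rejects the resize as above.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: demo-expandable
provisioner: kubernetes.io/aws-ebs
allowVolumeExpansion: true        # omit or set false to reproduce the rejection
EOF
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: demo-expandable
  resources:
    requests:
      storage: 1Gi
EOF
# Request a larger size; this only succeeds with allowVolumeExpansion: true.
kubectl patch pvc demo-claim --type=merge \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
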
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (ext4)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":3,"skipped":21,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:37.003: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 59 lines ... [32m• [SLOW TEST:55.461 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m removes definition from spec when one version gets changed to not be served [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:37.386: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 122 lines ... [32m• [SLOW TEST:34.412 seconds][0m [sig-storage] PVC Protection [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m Verify that PVC in active use by a pod is not removed immediately [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":1,"skipped":6,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 21 lines ... Jun 21 20:39:11.223: INFO: PersistentVolumeClaim pvc-9cnbp found but phase is Pending instead of Bound. 
Jun 21 20:39:13.379: INFO: PersistentVolumeClaim pvc-9cnbp found and phase=Bound (8.550402184s) Jun 21 20:39:13.379: INFO: Waiting up to 3m0s for PersistentVolume local-fpx8g to have phase Bound Jun 21 20:39:13.515: INFO: PersistentVolume local-fpx8g found and phase=Bound (135.872913ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-lfh4 [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 21 20:39:13.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lfh4" in namespace "provisioning-5535" to be "Succeeded or Failed" Jun 21 20:39:13.939: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 105.391367ms Jun 21 20:39:16.038: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204399324s Jun 21 20:39:18.136: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Running", Reason="", readiness=true. Elapsed: 4.302519371s Jun 21 20:39:20.234: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Running", Reason="", readiness=true. Elapsed: 6.40105699s Jun 21 20:39:22.334: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Running", Reason="", readiness=true. Elapsed: 8.500342095s Jun 21 20:39:24.444: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Running", Reason="", readiness=true. Elapsed: 10.610363113s Jun 21 20:39:26.543: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Running", Reason="", readiness=true. Elapsed: 12.709827246s Jun 21 20:39:28.643: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Running", Reason="", readiness=true. Elapsed: 14.809411361s Jun 21 20:39:30.747: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Running", Reason="", readiness=true. Elapsed: 16.913303032s Jun 21 20:39:32.846: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Running", Reason="", readiness=true. Elapsed: 19.012482065s Jun 21 20:39:34.952: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Running", Reason="", readiness=true. Elapsed: 21.118343491s Jun 21 20:39:37.055: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.221762867s [1mSTEP[0m: Saw pod success Jun 21 20:39:37.055: INFO: Pod "pod-subpath-test-preprovisionedpv-lfh4" satisfied condition "Succeeded or Failed" Jun 21 20:39:37.157: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-lfh4 container test-container-subpath-preprovisionedpv-lfh4: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:37.372: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lfh4 to disappear Jun 21 20:39:37.469: INFO: Pod pod-subpath-test-preprovisionedpv-lfh4 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-lfh4 Jun 21 20:39:37.469: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lfh4" in namespace "provisioning-5535" ... skipping 26 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":43,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 23 lines ... Jun 21 20:39:26.271: INFO: PersistentVolumeClaim pvc-d44t9 found but phase is Pending instead of Bound. Jun 21 20:39:28.373: INFO: PersistentVolumeClaim pvc-d44t9 found and phase=Bound (4.305958948s) Jun 21 20:39:28.373: INFO: Waiting up to 3m0s for PersistentVolume local-x2l7z to have phase Bound Jun 21 20:39:28.470: INFO: PersistentVolume local-x2l7z found and phase=Bound (96.832384ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-ndc6 [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:39:28.775: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ndc6" in namespace "provisioning-2909" to be "Succeeded or Failed" Jun 21 20:39:28.872: INFO: Pod "pod-subpath-test-preprovisionedpv-ndc6": Phase="Pending", Reason="", readiness=false. Elapsed: 97.290829ms Jun 21 20:39:30.979: INFO: Pod "pod-subpath-test-preprovisionedpv-ndc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204249154s Jun 21 20:39:33.078: INFO: Pod "pod-subpath-test-preprovisionedpv-ndc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303179969s Jun 21 20:39:35.178: INFO: Pod "pod-subpath-test-preprovisionedpv-ndc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403192248s Jun 21 20:39:37.277: INFO: Pod "pod-subpath-test-preprovisionedpv-ndc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.502073223s [1mSTEP[0m: Saw pod success Jun 21 20:39:37.277: INFO: Pod "pod-subpath-test-preprovisionedpv-ndc6" satisfied condition "Succeeded or Failed" Jun 21 20:39:37.382: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-ndc6 container test-container-subpath-preprovisionedpv-ndc6: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:37.620: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ndc6 to disappear Jun 21 20:39:37.725: INFO: Pod pod-subpath-test-preprovisionedpv-ndc6 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-ndc6 Jun 21 20:39:37.725: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ndc6" in namespace "provisioning-2909" ... skipping 30 lines ... 
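
The subPath suites running above (file as subpath, existing single file) both rely on volumeMounts.subPath selecting a single entry from a volume. A minimal sketch of that mechanism follows; the pod, volume and file names are invented.

# Sketch only: mount one file out of a shared volume via volumeMounts.subPath.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-file-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: seed
    image: busybox
    command: ["sh", "-c", "echo hello > /data/config.txt"]
    volumeMounts:
    - name: shared
      mountPath: /data
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/config.txt"]
    volumeMounts:
    - name: shared
      mountPath: /etc/demo/config.txt
      subPath: config.txt          # mounts just this file, not the whole volume
  volumes:
  - name: shared
    emptyDir: {}
EOF
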
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":33,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:40.480: INFO: Driver emptydir doesn't support ext4 -- skipping ... skipping 23 lines ... Jun 21 20:39:39.275: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0777 on tmpfs Jun 21 20:39:39.866: INFO: Waiting up to 5m0s for pod "pod-b5ac6372-86fe-4dfe-8ffb-807cc6a56cf3" in namespace "emptydir-1797" to be "Succeeded or Failed" Jun 21 20:39:39.964: INFO: Pod "pod-b5ac6372-86fe-4dfe-8ffb-807cc6a56cf3": Phase="Pending", Reason="", readiness=false. Elapsed: 97.980077ms Jun 21 20:39:42.066: INFO: Pod "pod-b5ac6372-86fe-4dfe-8ffb-807cc6a56cf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20060445s [1mSTEP[0m: Saw pod success Jun 21 20:39:42.066: INFO: Pod "pod-b5ac6372-86fe-4dfe-8ffb-807cc6a56cf3" satisfied condition "Succeeded or Failed" Jun 21 20:39:42.165: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-b5ac6372-86fe-4dfe-8ffb-807cc6a56cf3 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:42.370: INFO: Waiting for pod pod-b5ac6372-86fe-4dfe-8ffb-807cc6a56cf3 to disappear Jun 21 20:39:42.466: INFO: Pod pod-b5ac6372-86fe-4dfe-8ffb-807cc6a56cf3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 48 lines ... Jun 21 20:39:37.602: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 21 20:39:37.602: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7108 describe pod agnhost-primary-5cn27' Jun 21 20:39:38.474: INFO: stderr: "" Jun 21 20:39:38.474: INFO: stdout: "Name: agnhost-primary-5cn27\nNamespace: kubectl-7108\nPriority: 0\nNode: ip-172-20-0-5.eu-west-2.compute.internal/172.20.0.5\nStart Time: Tue, 21 Jun 2022 20:39:24 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 100.96.9.23\nIPs:\n IP: 100.96.9.23\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://89aa037017bd48f1d7256ef5959111461b740211b9b294e69c84657420c4a6ff\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 21 Jun 2022 20:39:29 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-598vr (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-598vr:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 13s default-scheduler Successfully assigned kubectl-7108/agnhost-primary-5cn27 to ip-172-20-0-5.eu-west-2.compute.internal\n Normal Pulled 9s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n Normal Created 9s kubelet Created container agnhost-primary\n Normal Started 9s kubelet Started container agnhost-primary\n" Jun 21 20:39:38.474: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7108 describe rc agnhost-primary' Jun 21 20:39:39.464: INFO: stderr: "" Jun 21 20:39:39.464: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7108\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 15s replication-controller Created pod: agnhost-primary-5cn27\n" Jun 21 20:39:39.464: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7108 describe service agnhost-primary' Jun 21 20:39:40.461: INFO: stderr: "" Jun 21 20:39:40.461: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7108\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: 
app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.68.30.95\nIPs: 100.68.30.95\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.9.23:6379\nSession Affinity: None\nEvents: <none>\n" Jun 21 20:39:40.559: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7108 describe node ip-172-20-0-148.eu-west-2.compute.internal' Jun 21 20:39:41.857: INFO: stderr: "" Jun 21 20:39:41.857: INFO: stdout: "Name: ip-172-20-0-148.eu-west-2.compute.internal\nRoles: node\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=t3.medium\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=eu-west-2\n failure-domain.beta.kubernetes.io/zone=eu-west-2a\n kubelet_cleanup=true\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-172-20-0-148.eu-west-2.compute.internal\n kubernetes.io/os=linux\n kubernetes.io/role=node\n node-role.kubernetes.io/node=\n node.kubernetes.io/instance-type=t3.medium\n topology.ebs.csi.aws.com/zone=eu-west-2a\n topology.kubernetes.io/region=eu-west-2\n topology.kubernetes.io/zone=eu-west-2a\nAnnotations: csi.volume.kubernetes.io/nodeid: {\"ebs.csi.aws.com\":\"i-0a740318a9456a046\"}\n io.cilium.network.ipv4-cilium-host: 100.96.8.243\n io.cilium.network.ipv4-health-ip: 100.96.8.180\n io.cilium.network.ipv4-pod-cidr: 100.96.8.0/24\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 21 Jun 2022 20:33:02 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: ip-172-20-0-148.eu-west-2.compute.internal\n AcquireTime: <unset>\n RenewTime: Tue, 21 Jun 2022 20:39:40 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Tue, 21 Jun 2022 20:33:21 +0000 Tue, 21 Jun 2022 20:33:21 +0000 CiliumIsUp Cilium is running on this node\n MemoryPressure False Tue, 21 Jun 2022 20:39:39 +0000 Tue, 21 Jun 2022 20:33:02 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 21 Jun 2022 20:39:39 +0000 Tue, 21 Jun 2022 20:33:02 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 21 Jun 2022 20:39:39 +0000 Tue, 21 Jun 2022 20:33:02 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 21 Jun 2022 20:39:39 +0000 Tue, 21 Jun 2022 20:33:22 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n InternalIP: 172.20.0.148\n ExternalIP: 35.176.67.196\n Hostname: ip-172-20-0-148.eu-west-2.compute.internal\n InternalDNS: ip-172-20-0-148.eu-west-2.compute.internal\n ExternalDNS: ec2-35-176-67-196.eu-west-2.compute.amazonaws.com\nCapacity:\n cpu: 2\n ephemeral-storage: 130045936Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 3969476Ki\n pods: 110\nAllocatable:\n cpu: 2\n ephemeral-storage: 119850334420\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 3867076Ki\n pods: 110\nSystem Info:\n Machine ID: ec2c257865a5b2a4bef8ab9f133ed7c7\n System UUID: ec2c2578-65a5-b2a4-bef8-ab9f133ed7c7\n Boot ID: 91a56b7a-ccd3-47cd-acff-dcabbb2efd60\n Kernel Version: 5.4.0-1029-aws\n OS Image: Ubuntu 20.04.1 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.6\n Kubelet Version: v1.23.1\n Kube-Proxy Version: v1.23.1\nPodCIDR: 100.96.8.0/24\nPodCIDRs: 100.96.8.0/24\nProviderID: aws:///eu-west-2a/i-0a740318a9456a046\nNon-terminated Pods: (12 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n apply-4799 deployment-shared-unset-c757c87b9-w5t2w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32s\n configmap-7288 pod-configmaps-65df3973-766e-40ea-9651-0631824f2370 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84s\n kube-system cilium-9rnwd 100m (5%) 0 (0%) 128Mi (3%) 100Mi (2%) 6m39s\n kube-system ebs-csi-node-kfwbk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m39s\n kube-system hubble-relay-55846f56fb-vgrbw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m21s\n kube-system kube-proxy-ip-172-20-0-148.eu-west-2.compute.internal 100m (5%) 0 (0%) 0 (0%) 0 (0%) 6m38s\n kube-system node-local-dns-qm82g 25m (1%) 0 (0%) 5Mi (0%) 0 (0%) 6m39s\n kubectl-7398 httpd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17s\n nettest-5867 netserver-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18s\n pod-network-test-7191 netserver-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26s\n provisioning-6869 pod-subpath-test-inlinevolume-jnrd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 0s\n services-4641 affinity-clusterip-timeout-pgk7k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 74s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 225m (11%) 0 (0%)\n memory 133Mi (3%) 100Mi (2%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 6m34s kube-proxy \n Warning listen tcp4 :32585: bind: address already in use 83s kube-proxy can't open port \"nodePort for services-8473/nodeport-range-test\" (:32585/tcp4), skipping it\n Warning listen tcp4 :32239: bind: address already in use 70s kube-proxy can't open port \"nodePort for services-1160/externalname-service:http\" (:32239/tcp4), skipping it\n Warning InvalidDiskCapacity 6m39s kubelet invalid capacity 0 on image filesystem\n Normal NodeHasSufficientPID 6m39s kubelet Node ip-172-20-0-148.eu-west-2.compute.internal status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 6m39s kubelet Updated Node Allocatable limit across pods\n Normal Starting 6m39s kubelet Starting kubelet.\n Normal NodeHasSufficientMemory 6m39s kubelet Node ip-172-20-0-148.eu-west-2.compute.internal status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 6m39s kubelet Node ip-172-20-0-148.eu-west-2.compute.internal status is now: NodeHasNoDiskPressure\n Normal NodeReady 6m19s kubelet Node ip-172-20-0-148.eu-west-2.compute.internal status is now: 
NodeReady\n" ... skipping 11 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl describe [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1107[0m should check if kubectl describe prints relevant information for rc and pods [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:43.025: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping ... skipping 156 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":1,"skipped":14,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:44.550: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 135 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395[0m [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeFeature:RuntimeHandler]","total":-1,"completed":5,"skipped":33,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:44.577: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 111 lines ... 
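
The fsgroupchangepolicy suite reported PASSED above checks that volume contents take on the pod's fsGroup. A hedged sketch of the two fields involved follows; the pod name and group id are invented.

# Sketch only: fsGroup plus fsGroupChangePolicy. With "Always" the kubelet changes
# ownership and permissions of the volume to match fsGroup on every mount; with
# "OnRootMismatch" it only does so when the volume root does not already match.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 2000
    fsGroupChangePolicy: Always
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "id && ls -ln /data"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
EOF
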
[32m• [SLOW TEST:88.180 seconds][0m [sig-apps] CronJob [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should remove from active list jobs that have been deleted [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:239[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":4,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 128 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m Clean up pods on node [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279[0m kubelet should be able to delete 10 pods per node in 1m0s. [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":3,"skipped":22,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:45.856: INFO: Driver aws doesn't support ext3 -- skipping ... skipping 119 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:39:45.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "podtemplate-6865" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath ... skipping 5 lines ... [It] should support readOnly directory specified in the volumeMount /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365 Jun 21 20:39:40.983: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics Jun 21 20:39:40.983: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-jnrd [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:39:41.085: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jnrd" in namespace "provisioning-6869" to be "Succeeded or Failed" Jun 21 20:39:41.186: INFO: Pod "pod-subpath-test-inlinevolume-jnrd": Phase="Pending", Reason="", readiness=false. Elapsed: 101.54348ms Jun 21 20:39:43.304: INFO: Pod "pod-subpath-test-inlinevolume-jnrd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219584092s Jun 21 20:39:45.404: INFO: Pod "pod-subpath-test-inlinevolume-jnrd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.319595571s Jun 21 20:39:47.502: INFO: Pod "pod-subpath-test-inlinevolume-jnrd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.417812535s [1mSTEP[0m: Saw pod success Jun 21 20:39:47.502: INFO: Pod "pod-subpath-test-inlinevolume-jnrd" satisfied condition "Succeeded or Failed" Jun 21 20:39:47.600: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-jnrd container test-container-subpath-inlinevolume-jnrd: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:47.825: INFO: Waiting for pod pod-subpath-test-inlinevolume-jnrd to disappear Jun 21 20:39:47.922: INFO: Pod pod-subpath-test-inlinevolume-jnrd no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-jnrd Jun 21 20:39:47.922: INFO: Deleting pod "pod-subpath-test-inlinevolume-jnrd" in namespace "provisioning-6869" ... skipping 12 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":41,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 167 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:49 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:54 [1mSTEP[0m: Creating a pod to test emptydir 0644 on tmpfs Jun 21 20:39:45.709: INFO: Waiting up to 5m0s for pod "pod-cbe0ee7b-b14c-4acd-9dd6-38ee51c8f332" in namespace "emptydir-5750" to be "Succeeded or Failed" Jun 21 20:39:45.806: INFO: Pod "pod-cbe0ee7b-b14c-4acd-9dd6-38ee51c8f332": Phase="Pending", Reason="", readiness=false. Elapsed: 96.920168ms Jun 21 20:39:47.904: INFO: Pod "pod-cbe0ee7b-b14c-4acd-9dd6-38ee51c8f332": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195393831s Jun 21 20:39:50.011: INFO: Pod "pod-cbe0ee7b-b14c-4acd-9dd6-38ee51c8f332": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.302393796s [1mSTEP[0m: Saw pod success Jun 21 20:39:50.011: INFO: Pod "pod-cbe0ee7b-b14c-4acd-9dd6-38ee51c8f332" satisfied condition "Succeeded or Failed" Jun 21 20:39:50.109: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-cbe0ee7b-b14c-4acd-9dd6-38ee51c8f332 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:50.320: INFO: Waiting for pod pod-cbe0ee7b-b14c-4acd-9dd6-38ee51c8f332 to disappear Jun 21 20:39:50.417: INFO: Pod pod-cbe0ee7b-b14c-4acd-9dd6-38ee51c8f332 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:47[0m new files should be created with FSGroup ownership when container is root [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:54[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":2,"skipped":8,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 57 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m when create a pod with lifecycle hook [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44[0m should execute poststart exec hook properly [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:51.442: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 66 lines ... 
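
The Container Lifecycle Hook test above exercises a postStart exec handler. A minimal sketch is below; the pod name, image and command are invented.

# Sketch only: a postStart exec hook. Kubernetes runs the handler right after the
# container is created and does not mark the container Running until it completes;
# a failing hook gets the container killed and restarted per its restartPolicy.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /usr/share/message"]
EOF
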
[BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:39:45.879: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename job [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to exceed backoffLimit /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:345 [1mSTEP[0m: Creating a job [1mSTEP[0m: Ensuring job exceed backofflimit [1mSTEP[0m: Checking that 2 pod created and status is failed [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:39:52.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "job-3078" for this suite. [32m• [SLOW TEST:7.084 seconds][0m [sig-apps] Job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should fail to exceed backoffLimit [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:345[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":4,"skipped":37,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":2,"skipped":15,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:39:12.946: INFO: >>> kubeConfig: /root/.kube/config ... skipping 96 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":15,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 25 lines ... 
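
The "should fail to exceed backoffLimit" Job test above (two failed pods, then the Job itself reported failed) maps onto spec.backoffLimit. A sketch follows; the job name, image and limit value are invented.

# Sketch only: a Job whose pods always fail, bounded by spec.backoffLimit. Once the
# limit is exceeded the Job gains a Failed condition with reason BackoffLimitExceeded.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: backoff-demo
spec:
  backoffLimit: 1        # one retry after the first failure, then give up
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: fail
        image: busybox
        command: ["sh", "-c", "exit 1"]
EOF
kubectl get job backoff-demo \
  -o jsonpath='{.status.conditions[?(@.type=="Failed")].reason}'
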
[32m• [SLOW TEST:97.138 seconds][0m [sig-storage] Secrets [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m optional updates should be reflected in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes ... skipping 146 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23[0m Granular Checks: Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30[0m should function for intra-pod communication: http [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:55.081: INFO: Driver "csi-hostpath" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 32 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:39:55.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-8366" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:55.433: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 42 lines ... 
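
The Secrets "optional updates should be reflected in volume" case reported above relies on marking the secret volume source optional. A sketch follows; the secret and pod names are invented.

# Sketch only: an optional secret volume. The pod starts even though the secret does
# not exist yet; after the secret is created (or later updated) the kubelet
# eventually syncs the mounted files on its periodic update.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: watch
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/opt-secret/key 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: opt
      mountPath: /etc/opt-secret
  volumes:
  - name: opt
    secret:
      secretName: created-later
      optional: true       # tolerate a missing secret; mount stays empty until it exists
EOF
kubectl create secret generic created-later --from-literal=key=value-1
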
[32m• [SLOW TEST:98.772 seconds][0m [sig-storage] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m updates should be reflected in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:55.713: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 42 lines ... Jun 21 20:39:57.161: INFO: AfterEach: Cleaning up test resources. Jun 21 20:39:57.161: INFO: pvc is nil Jun 21 20:39:57.161: INFO: Deleting PersistentVolume "hostpath-v26qm" [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":2,"skipped":3,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:39:52.973: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-7d4b5449-42a8-4f2c-8e3f-5fb87db96bf9 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:39:53.673: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf8bbf31-4907-4878-af07-0c2d0a87b2b2" in namespace "projected-1491" to be "Succeeded or Failed" Jun 21 20:39:53.773: INFO: Pod "pod-projected-configmaps-cf8bbf31-4907-4878-af07-0c2d0a87b2b2": Phase="Pending", Reason="", readiness=false. Elapsed: 99.512441ms Jun 21 20:39:55.888: INFO: Pod "pod-projected-configmaps-cf8bbf31-4907-4878-af07-0c2d0a87b2b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214491581s Jun 21 20:39:57.985: INFO: Pod "pod-projected-configmaps-cf8bbf31-4907-4878-af07-0c2d0a87b2b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.312133318s [1mSTEP[0m: Saw pod success Jun 21 20:39:57.985: INFO: Pod "pod-projected-configmaps-cf8bbf31-4907-4878-af07-0c2d0a87b2b2" satisfied condition "Succeeded or Failed" Jun 21 20:39:58.085: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-projected-configmaps-cf8bbf31-4907-4878-af07-0c2d0a87b2b2 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:39:58.301: INFO: Waiting for pod pod-projected-configmaps-cf8bbf31-4907-4878-af07-0c2d0a87b2b2 to disappear Jun 21 20:39:58.398: INFO: Pod pod-projected-configmaps-cf8bbf31-4907-4878-af07-0c2d0a87b2b2 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.632 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0} [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:39:58.607: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 23 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: dir] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:39:42.666: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename resourcequota [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... 
skipping 125 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":2,"skipped":4,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy ... skipping 101 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":2,"skipped":17,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:04.513: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 23 lines ... Jun 21 20:39:58.617: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0777 on tmpfs Jun 21 20:39:59.476: INFO: Waiting up to 5m0s for pod "pod-760ce7c3-e74d-4054-8b89-3449ed9cad7d" in namespace "emptydir-7807" to be "Succeeded or Failed" Jun 21 20:39:59.600: INFO: Pod "pod-760ce7c3-e74d-4054-8b89-3449ed9cad7d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 123.431733ms Jun 21 20:40:01.707: INFO: Pod "pod-760ce7c3-e74d-4054-8b89-3449ed9cad7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230289807s Jun 21 20:40:03.809: INFO: Pod "pod-760ce7c3-e74d-4054-8b89-3449ed9cad7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332822069s [1mSTEP[0m: Saw pod success Jun 21 20:40:03.809: INFO: Pod "pod-760ce7c3-e74d-4054-8b89-3449ed9cad7d" satisfied condition "Succeeded or Failed" Jun 21 20:40:03.906: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-760ce7c3-e74d-4054-8b89-3449ed9cad7d container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:04.141: INFO: Waiting for pod pod-760ce7c3-e74d-4054-8b89-3449ed9cad7d to disappear Jun 21 20:40:04.238: INFO: Pod pod-760ce7c3-e74d-4054-8b89-3449ed9cad7d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.910 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:04.534: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping ... skipping 14 lines ... [36mDriver hostPath doesn't support GenericEphemeralVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeConformance]","total":-1,"completed":5,"skipped":75,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:39:50.991: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename kubectl [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 195 lines ... 
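The EmptyDir case logged just above ("should support (non-root,0777,tmpfs)") runs a pod as a non-root user against a memory-backed emptyDir; the 0777 mode check is performed by the test container itself rather than by any field in the pod spec. Roughly the volume and security settings being exercised (user ID, image and names are assumptions, not read from the test source):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/ok"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # the "tmpfs" part of the test name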
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Guestbook application [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339[0m should create and stop a working application [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":6,"skipped":75,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:06.577: INFO: Only supported for providers [gce gke] (not aws) ... skipping 84 lines ... [32m• [SLOW TEST:86.615 seconds][0m [sig-storage] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m optional updates should be reflected in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:07.201: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 62 lines ... Jun 21 20:39:56.637: INFO: PersistentVolumeClaim pvc-94zmp found but phase is Pending instead of Bound. Jun 21 20:39:58.769: INFO: PersistentVolumeClaim pvc-94zmp found and phase=Bound (4.343895189s) Jun 21 20:39:58.770: INFO: Waiting up to 3m0s for PersistentVolume local-p7cl2 to have phase Bound Jun 21 20:39:58.866: INFO: PersistentVolume local-p7cl2 found and phase=Bound (96.50662ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-5mf7 [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:39:59.288: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5mf7" in namespace "provisioning-3610" to be "Succeeded or Failed" Jun 21 20:39:59.436: INFO: Pod "pod-subpath-test-preprovisionedpv-5mf7": Phase="Pending", Reason="", readiness=false. Elapsed: 147.998288ms Jun 21 20:40:01.534: INFO: Pod "pod-subpath-test-preprovisionedpv-5mf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245278879s Jun 21 20:40:03.639: INFO: Pod "pod-subpath-test-preprovisionedpv-5mf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350856035s Jun 21 20:40:05.738: INFO: Pod "pod-subpath-test-preprovisionedpv-5mf7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.449333207s [1mSTEP[0m: Saw pod success Jun 21 20:40:05.738: INFO: Pod "pod-subpath-test-preprovisionedpv-5mf7" satisfied condition "Succeeded or Failed" Jun 21 20:40:05.836: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-5mf7 container test-container-volume-preprovisionedpv-5mf7: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:06.060: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5mf7 to disappear Jun 21 20:40:06.163: INFO: Pod pod-subpath-test-preprovisionedpv-5mf7 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-5mf7 Jun 21 20:40:06.164: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5mf7" in namespace "provisioning-3610" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":14,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:07.554: INFO: Only supported for providers [openstack] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 124 lines ... [36mDriver local doesn't support GenericEphemeralVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0} [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:40:01.279: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename hostpath [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 [1mSTEP[0m: Creating a pod to test hostPath subPath Jun 21 20:40:01.878: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-152" to be "Succeeded or Failed" Jun 21 20:40:02.003: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 125.465012ms Jun 21 20:40:04.101: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22366054s Jun 21 20:40:06.199: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321125769s Jun 21 20:40:08.296: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.418119303s [1mSTEP[0m: Saw pod success Jun 21 20:40:08.296: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 21 20:40:08.392: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-host-path-test container test-container-2: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:08.595: INFO: Waiting for pod pod-host-path-test to disappear Jun 21 20:40:08.691: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:7.620 seconds][0m [sig-storage] HostPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support subPath [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":4,"skipped":9,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:08.903: INFO: Only supported for providers [vsphere] (not aws) ... skipping 90 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and write from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":9,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:09.888: INFO: Only supported for providers [azure] (not aws) ... skipping 48 lines ... 
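The sig-storage HostPath case above ("should support subPath") shares a hostPath volume between containers and mounts part of it through volumeMounts[].subPath; the log fetches output from test-container-2 to verify the result. A minimal illustration of a subPath mount (host path, names and image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-subpath-demo       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "mkdir -p /data/sub && echo hello > /data/sub/file && sleep 3600"]
    volumeMounts:
    - name: host-vol
      mountPath: /data              # sees the whole volume
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: host-vol
      mountPath: /sub-only
      subPath: sub                  # sees only the "sub" subdirectory of the same volume
  volumes:
  - name: host-vol
    hostPath:
      path: /tmp/hostpath-demo      # hypothetical directory on the node
      type: DirectoryOrCreate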
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:49 [It] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:70 [1mSTEP[0m: Creating a pod to test emptydir volume type on node default medium Jun 21 20:40:05.166: INFO: Waiting up to 5m0s for pod "pod-6a0a15ad-aa7d-4340-9b5d-b8eae2dce58f" in namespace "emptydir-7307" to be "Succeeded or Failed" Jun 21 20:40:05.267: INFO: Pod "pod-6a0a15ad-aa7d-4340-9b5d-b8eae2dce58f": Phase="Pending", Reason="", readiness=false. Elapsed: 101.366424ms Jun 21 20:40:07.368: INFO: Pod "pod-6a0a15ad-aa7d-4340-9b5d-b8eae2dce58f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201701419s Jun 21 20:40:09.469: INFO: Pod "pod-6a0a15ad-aa7d-4340-9b5d-b8eae2dce58f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.303395695s [1mSTEP[0m: Saw pod success Jun 21 20:40:09.469: INFO: Pod "pod-6a0a15ad-aa7d-4340-9b5d-b8eae2dce58f" satisfied condition "Succeeded or Failed" Jun 21 20:40:09.599: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-6a0a15ad-aa7d-4340-9b5d-b8eae2dce58f container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:09.838: INFO: Waiting for pod pod-6a0a15ad-aa7d-4340-9b5d-b8eae2dce58f to disappear Jun 21 20:40:09.939: INFO: Pod pod-6a0a15ad-aa7d-4340-9b5d-b8eae2dce58f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 65 lines ... 
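The EmptyDir FSGroup case above ("volume on default medium should have the correct mode using FSGroup") checks that, when pod.spec.securityContext.fsGroup is set, an emptyDir on the node's default medium is created with that group ownership and the expected mode. The fields involved look roughly like this (the GID, image and names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo      # illustrative
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 2000                  # volumes that support ownership management are set to this GID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/volume"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir: {}                   # default medium (node disk), per the test name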
Jun 21 20:39:51.596: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7398 cp /tmp/icc-override3946417445/invalid-configmap-without-namespace.yaml kubectl-7398/httpd:/tmp/' Jun 21 20:39:53.881: INFO: stderr: "" Jun 21 20:39:53.881: INFO: stdout: "" [1mSTEP[0m: getting pods with in-cluster configs Jun 21 20:39:53.881: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7398 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --v=6 2>&1' Jun 21 20:39:57.007: INFO: stderr: "+ /tmp/kubectl get pods '--v=6'\n" Jun 21 20:39:57.007: INFO: stdout: "I0621 20:39:56.620403 152 merged_client_builder.go:163] Using in-cluster namespace\nI0621 20:39:56.621399 152 merged_client_builder.go:121] Using in-cluster configuration\nI0621 20:39:56.658374 152 round_trippers.go:553] GET https://100.64.0.1:443/api?timeout=32s 200 OK in 35 milliseconds\nI0621 20:39:56.673095 152 round_trippers.go:553] GET https://100.64.0.1:443/apis?timeout=32s 200 OK in 5 milliseconds\nI0621 20:39:56.690239 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds\nI0621 20:39:56.691249 152 round_trippers.go:553] GET https://100.64.0.1:443/api/v1?timeout=32s 200 OK in 2 milliseconds\nI0621 20:39:56.692952 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0621 20:39:56.693681 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/events.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0621 20:39:56.693708 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/apps/v1?timeout=32s 200 OK in 5 milliseconds\nI0621 20:39:56.695135 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/node.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0621 20:39:56.695166 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0621 20:39:56.695182 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0621 20:39:56.695217 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0621 20:39:56.695233 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0621 20:39:56.695908 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0621 20:39:56.702746 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/autoscaling/v2?timeout=32s 200 OK in 13 milliseconds\nI0621 20:39:56.702871 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.703484 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 14 milliseconds\nI0621 20:39:56.703559 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 14 milliseconds\nI0621 20:39:56.703603 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/batch/v1?timeout=32s 200 OK in 13 milliseconds\nI0621 20:39:56.703635 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/networking.k8s.io/v1?timeout=32s 
200 OK in 13 milliseconds\nI0621 20:39:56.703668 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/policy/v1?timeout=32s 200 OK in 13 milliseconds\nI0621 20:39:56.703700 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/policy/v1beta1?timeout=32s 200 OK in 13 milliseconds\nI0621 20:39:56.703729 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/discovery.k8s.io/v1?timeout=32s 200 OK in 13 milliseconds\nI0621 20:39:56.703763 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 13 milliseconds\nI0621 20:39:56.703797 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 13 milliseconds\nI0621 20:39:56.703827 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/discovery.k8s.io/v1beta1?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.703856 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1beta2?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.703885 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 13 milliseconds\nI0621 20:39:56.703932 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 14 milliseconds\nI0621 20:39:56.703966 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.704000 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 14 milliseconds\nI0621 20:39:56.704036 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/cert-manager.io/v1?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.704076 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/autoscaling/v1?timeout=32s 200 OK in 14 milliseconds\nI0621 20:39:56.704112 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/cilium.io/v2?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.704147 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/cilium.io/v2alpha1?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.704178 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 12 milliseconds\nI0621 20:39:56.704216 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/elbv2.k8s.aws/v1alpha1?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.704271 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/elbv2.k8s.aws/v1beta1?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.704302 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/acme.cert-manager.io/v1?timeout=32s 200 OK in 12 milliseconds\nI0621 20:39:56.704331 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/batch/v1beta1?timeout=32s 200 OK in 14 milliseconds\nI0621 20:39:56.918039 152 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }\nI0621 20:39:56.918069 152 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request\nI0621 20:39:56.929456 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 2 milliseconds\nI0621 20:39:56.934301 152 request.go:1372] body was not decodable 
(unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }\nI0621 20:39:56.934509 152 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request\nI0621 20:39:56.934679 152 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request\nI0621 20:39:56.940425 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 5 milliseconds\nI0621 20:39:56.945354 152 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }\nI0621 20:39:56.945434 152 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request\nI0621 20:39:56.946513 152 merged_client_builder.go:121] Using in-cluster configuration\nI0621 20:39:56.948514 152 round_trippers.go:553] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-7398/pods?limit=500 200 OK in 1 milliseconds\nNAME READY STATUS RESTARTS AGE\nhttpd 1/1 Running 0 32s\n" Jun 21 20:39:57.007: INFO: stdout: I0621 20:39:56.620403 152 merged_client_builder.go:163] Using in-cluster namespace I0621 20:39:56.621399 152 merged_client_builder.go:121] Using in-cluster configuration I0621 20:39:56.658374 152 round_trippers.go:553] GET https://100.64.0.1:443/api?timeout=32s 200 OK in 35 milliseconds I0621 20:39:56.673095 152 round_trippers.go:553] GET https://100.64.0.1:443/apis?timeout=32s 200 OK in 5 milliseconds I0621 20:39:56.690239 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds I0621 20:39:56.691249 152 round_trippers.go:553] GET https://100.64.0.1:443/api/v1?timeout=32s 200 OK in 2 milliseconds ... skipping 29 lines ... 
I0621 20:39:56.704147 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/cilium.io/v2alpha1?timeout=32s 200 OK in 12 milliseconds I0621 20:39:56.704178 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 12 milliseconds I0621 20:39:56.704216 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/elbv2.k8s.aws/v1alpha1?timeout=32s 200 OK in 12 milliseconds I0621 20:39:56.704271 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/elbv2.k8s.aws/v1beta1?timeout=32s 200 OK in 12 milliseconds I0621 20:39:56.704302 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/acme.cert-manager.io/v1?timeout=32s 200 OK in 12 milliseconds I0621 20:39:56.704331 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/batch/v1beta1?timeout=32s 200 OK in 14 milliseconds I0621 20:39:56.918039 152 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0621 20:39:56.918069 152 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0621 20:39:56.929456 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 2 milliseconds I0621 20:39:56.934301 152 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0621 20:39:56.934509 152 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0621 20:39:56.934679 152 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request I0621 20:39:56.940425 152 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 5 milliseconds I0621 20:39:56.945354 152 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0621 20:39:56.945434 152 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0621 20:39:56.946513 152 merged_client_builder.go:121] Using in-cluster configuration I0621 20:39:56.948514 152 round_trippers.go:553] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-7398/pods?limit=500 200 OK in 1 milliseconds NAME READY STATUS RESTARTS AGE httpd 1/1 Running 0 32s ... skipping 3 lines ... 
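A note on the repeated 503s above: /apis/metrics.k8s.io/v1beta1 is an aggregated API, so discovery for that group is proxied to whatever Service the matching APIService object points at. When that backend (usually metrics-server) is not ready, the aggregation layer answers 503 Service Unavailable and kubectl logs "skipped caching discovery info due to the server is currently unable to handle the request", exactly as seen here, while everything served directly by kube-apiserver still returns 200. The registration object has roughly this shape; the service name and namespace are the usual defaults, assumed rather than read from this cluster:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  service:
    name: metrics-server           # assumed backing Service
    namespace: kube-system         # assumed namespace
  groupPriorityMinimum: 100
  versionPriority: 100
  insecureSkipTLSVerify: true      # common in test clusters; production setups pin a caBundle instead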
[1mSTEP[0m: creating an object not containing a namespace with in-cluster config Jun 21 20:39:59.497: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7398 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1' Jun 21 20:40:02.635: INFO: rc: 255 [1mSTEP[0m: trying to use kubectl with invalid token Jun 21 20:40:02.635: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7398 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1' Jun 21 20:40:04.283: INFO: rc: 255 Jun 21 20:40:04.283: INFO: got err error running /logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7398 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1: Command stdout: I0621 20:40:04.133458 182 merged_client_builder.go:163] Using in-cluster namespace I0621 20:40:04.133650 182 merged_client_builder.go:121] Using in-cluster configuration I0621 20:40:04.134599 182 round_trippers.go:463] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s I0621 20:40:04.134612 182 round_trippers.go:469] Request Headers: I0621 20:40:04.134622 182 round_trippers.go:473] Authorization: Bearer <masked> ... skipping 5 lines ... I0621 20:40:04.158641 182 round_trippers.go:469] Request Headers: I0621 20:40:04.158648 182 round_trippers.go:473] User-Agent: kubectl/v1.23.1 (linux/amd64) kubernetes/86ec240 I0621 20:40:04.158674 182 round_trippers.go:473] Authorization: Bearer <masked> I0621 20:40:04.158681 182 round_trippers.go:473] Accept: application/json, */* I0621 20:40:04.159698 182 round_trippers.go:574] Response Status: 401 Unauthorized in 1 milliseconds I0621 20:40:04.163977 182 cached_discovery.go:78] skipped caching discovery info due to Unauthorized I0621 20:40:04.164026 182 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: Unauthorized I0621 20:40:04.167889 182 round_trippers.go:463] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s I0621 20:40:04.167902 182 round_trippers.go:469] Request Headers: I0621 20:40:04.167911 182 round_trippers.go:473] Accept: application/json, */* I0621 20:40:04.167919 182 round_trippers.go:473] User-Agent: kubectl/v1.23.1 (linux/amd64) kubernetes/86ec240 I0621 20:40:04.167930 182 round_trippers.go:473] Authorization: Bearer <masked> I0621 20:40:04.169387 182 round_trippers.go:574] Response Status: 401 Unauthorized in 1 milliseconds ... skipping 11 lines ... 
"metadata": {}, "status": "Failure", "message": "Unauthorized", "reason": "Unauthorized", "code": 401 }] F0621 20:40:04.176222 182 helpers.go:118] error: You must be logged in to the server (Unauthorized) goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1038 +0x8a k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307e020, 0x3, 0x0, 0xc0004be0e0, 0x2, {0x25f1447, 0x10}, 0xc000060800, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:987 +0x5fd k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0xc00014c1c0, 0x3a, 0x0, {0x0, 0x0}, 0x0, {0xc000596f30, 0x1, 0x1}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x1ae k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1518 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal({0xc00014c1c0, 0x3a}, 0xc000596e70) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:96 +0xc5 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr({0x1fecc40, 0xc0006942d0}, 0x1e78210) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:180 +0x69a k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:118 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc000133b80, {0xc0005fe420, 0x1, 0x3}) ... skipping 70 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5 stderr: + /tmp/kubectl get pods '--token=invalid' '--v=7' command terminated with exit code 255 error: exit status 255 [1mSTEP[0m: trying to use kubectl with invalid server Jun 21 20:40:04.283: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7398 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1' Jun 21 20:40:05.670: INFO: rc: 255 Jun 21 20:40:05.670: INFO: got err error running /logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7398 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1: Command stdout: I0621 20:40:05.576858 192 merged_client_builder.go:163] Using in-cluster namespace I0621 20:40:05.601489 192 round_trippers.go:553] GET http://invalid/api?timeout=32s in 24 milliseconds I0621 20:40:05.601778 192 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0621 20:40:05.604613 192 round_trippers.go:553] GET http://invalid/api?timeout=32s in 2 milliseconds I0621 20:40:05.604690 192 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0621 20:40:05.604738 192 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0621 20:40:05.607269 192 round_trippers.go:553] GET http://invalid/api?timeout=32s in 1 milliseconds I0621 20:40:05.607330 192 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0621 20:40:05.609226 192 round_trippers.go:553] GET http://invalid/api?timeout=32s in 1 milliseconds I0621 20:40:05.609282 192 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0621 20:40:05.611607 192 round_trippers.go:553] GET http://invalid/api?timeout=32s in 2 milliseconds I0621 20:40:05.611655 192 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 169.254.20.10:53: no such host I0621 20:40:05.611683 192 helpers.go:237] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 169.254.20.10:53: no such host F0621 20:40:05.611700 192 helpers.go:118] Unable to connect to the server: dial tcp: lookup invalid on 169.254.20.10:53: no such host goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1038 +0x8a k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307e020, 0x3, 0x0, 0xc0004d6e00, 0x2, {0x25f1447, 0x10}, 0xc000060800, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:987 +0x5fd k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0xc000024180, 0x5b, 0x0, {0x0, 0x0}, 0x35, {0xc0005ceff0, 0x1, 0x1}) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x1ae k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1518 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal({0xc000024180, 0x5b}, 0xc000515230) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:96 +0xc5 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr({0x1febee0, 0xc000515230}, 0x1e78210) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:191 +0x7d7 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:118 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc0006c8c80, {0xc000664f30, 0x1, 0x3}) ... skipping 28 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:179 +0x85 stderr: + /tmp/kubectl get pods '--server=invalid' '--v=6' command terminated with exit code 255 error: exit status 255 [1mSTEP[0m: trying to use kubectl with invalid namespace Jun 21 20:40:05.670: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7398 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1' Jun 21 20:40:07.126: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n" Jun 21 20:40:07.126: INFO: stdout: "I0621 20:40:07.018443 202 merged_client_builder.go:121] Using in-cluster configuration\nI0621 20:40:07.032565 202 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 13 milliseconds\nI0621 20:40:07.041200 202 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }\nI0621 20:40:07.041243 202 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request\nI0621 20:40:07.044828 202 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 3 milliseconds\nI0621 20:40:07.049430 202 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }\nI0621 20:40:07.049449 202 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request\nI0621 20:40:07.049485 202 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request\nI0621 20:40:07.053774 202 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 2 milliseconds\nI0621 20:40:07.058409 202 request.go:1372] body was not decodable (unable 
to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }\nI0621 20:40:07.058425 202 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request\nI0621 20:40:07.059368 202 merged_client_builder.go:121] Using in-cluster configuration\nI0621 20:40:07.068394 202 round_trippers.go:553] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 8 milliseconds\nNo resources found in invalid namespace.\n" Jun 21 20:40:07.126: INFO: stdout: I0621 20:40:07.018443 202 merged_client_builder.go:121] Using in-cluster configuration I0621 20:40:07.032565 202 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 13 milliseconds I0621 20:40:07.041200 202 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0621 20:40:07.041243 202 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0621 20:40:07.044828 202 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 3 milliseconds I0621 20:40:07.049430 202 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0621 20:40:07.049449 202 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0621 20:40:07.049485 202 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request I0621 20:40:07.053774 202 round_trippers.go:553] GET https://100.64.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 2 milliseconds I0621 20:40:07.058409 202 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0621 20:40:07.058425 202 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0621 20:40:07.059368 202 merged_client_builder.go:121] Using in-cluster configuration I0621 20:40:07.068394 202 round_trippers.go:553] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 8 milliseconds No resources found in invalid namespace. 
[1mSTEP[0m: trying to use kubectl with kubeconfig Jun 21 20:40:07.126: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7398 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1' Jun 21 20:40:08.626: INFO: stderr: "+ /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'\n" Jun 21 20:40:08.626: INFO: stdout: "I0621 20:40:08.326468 213 loader.go:372] Config loaded from file: /tmp/icc-override.kubeconfig\nI0621 20:40:08.348712 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/api?timeout=32s 200 OK in 18 milliseconds\nI0621 20:40:08.358062 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis?timeout=32s 200 OK in 1 milliseconds\nI0621 20:40:08.371056 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/api/v1?timeout=32s 200 OK in 4 milliseconds\nI0621 20:40:08.371364 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds\nI0621 20:40:08.371447 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds\nI0621 20:40:08.371639 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds\nI0621 20:40:08.371812 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/policy/v1beta1?timeout=32s 200 OK in 5 milliseconds\nI0621 20:40:08.372013 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds\nI0621 20:40:08.372132 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/apps/v1?timeout=32s 200 OK in 5 milliseconds\nI0621 20:40:08.372233 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds\nI0621 20:40:08.372388 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds\nI0621 20:40:08.372496 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/flowcontrol.apiserver.k8s.io/v1beta2?timeout=32s 200 OK in 3 milliseconds\nI0621 20:40:08.372622 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds\nI0621 20:40:08.372845 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds\nI0621 20:40:08.372958 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/discovery.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0621 20:40:08.373099 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0621 20:40:08.374864 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/discovery.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds\nI0621 20:40:08.374892 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.374931 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0621 20:40:08.374950 213 round_trippers.go:553] GET 
https://kubernetes.default.svc:443/apis/autoscaling/v1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.375454 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/batch/v1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.375493 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.375508 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.375536 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/policy/v1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.375905 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/autoscaling/v2?timeout=32s 200 OK in 8 milliseconds\nI0621 20:40:08.375964 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds\nI0621 20:40:08.375997 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 8 milliseconds\nI0621 20:40:08.376037 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/batch/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0621 20:40:08.376073 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.376167 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.376483 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.376669 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/cert-manager.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.376908 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/elbv2.k8s.aws/v1alpha1?timeout=32s 200 OK in 7 milliseconds\nI0621 20:40:08.378362 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/crd-publish-openapi-test-unknown-in-nested.example.com/v1?timeout=32s 200 OK in 8 milliseconds\nI0621 20:40:08.378412 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/elbv2.k8s.aws/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0621 20:40:08.378455 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/cilium.io/v2?timeout=32s 200 OK in 8 milliseconds\nI0621 20:40:08.378529 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/cilium.io/v2alpha1?timeout=32s 200 OK in 8 milliseconds\nI0621 20:40:08.378553 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 8 milliseconds\nI0621 20:40:08.382072 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 15 milliseconds\nI0621 20:40:08.382372 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/acme.cert-manager.io/v1?timeout=32s 200 OK in 13 milliseconds\nI0621 20:40:08.519059 213 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }\nI0621 20:40:08.519084 213 cached_discovery.go:78] skipped caching 
discovery info due to the server is currently unable to handle the request\nI0621 20:40:08.552087 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 5 milliseconds\nI0621 20:40:08.558187 213 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }\nI0621 20:40:08.558214 213 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request\nI0621 20:40:08.558255 213 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request\nI0621 20:40:08.562059 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 3 milliseconds\nI0621 20:40:08.566920 213 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }\nI0621 20:40:08.566941 213 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request\nI0621 20:40:08.569854 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/api/v1/namespaces/default/pods?limit=500 200 OK in 1 milliseconds\nNo resources found in default namespace.\n" Jun 21 20:40:08.627: INFO: stdout: I0621 20:40:08.326468 213 loader.go:372] Config loaded from file: /tmp/icc-override.kubeconfig I0621 20:40:08.348712 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/api?timeout=32s 200 OK in 18 milliseconds I0621 20:40:08.358062 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis?timeout=32s 200 OK in 1 milliseconds I0621 20:40:08.371056 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/api/v1?timeout=32s 200 OK in 4 milliseconds I0621 20:40:08.371364 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds I0621 20:40:08.371447 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds ... skipping 29 lines ... 
I0621 20:40:08.378412 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/elbv2.k8s.aws/v1beta1?timeout=32s 200 OK in 8 milliseconds I0621 20:40:08.378455 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/cilium.io/v2?timeout=32s 200 OK in 8 milliseconds I0621 20:40:08.378529 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/cilium.io/v2alpha1?timeout=32s 200 OK in 8 milliseconds I0621 20:40:08.378553 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 8 milliseconds I0621 20:40:08.382072 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 15 milliseconds I0621 20:40:08.382372 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/acme.cert-manager.io/v1?timeout=32s 200 OK in 13 milliseconds I0621 20:40:08.519059 213 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0621 20:40:08.519084 213 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0621 20:40:08.552087 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 5 milliseconds I0621 20:40:08.558187 213 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0621 20:40:08.558214 213 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0621 20:40:08.558255 213 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request I0621 20:40:08.562059 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/apis/metrics.k8s.io/v1beta1?timeout=32s 503 Service Unavailable in 3 milliseconds I0621 20:40:08.566920 213 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0621 20:40:08.566941 213 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0621 20:40:08.569854 213 round_trippers.go:553] GET https://kubernetes.default.svc:443/api/v1/namespaces/default/pods?limit=500 200 OK in 1 milliseconds No resources found in default namespace. [AfterEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:387 ... skipping 18 lines ... 
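The in-cluster config exercise above has kubectl, running inside the httpd pod, reach the API server at kubernetes.default.svc with the credentials mounted from the pod's service account; the 503 from metrics.k8s.io only prevents discovery caching, and the final pod list still returns 200. A minimal client-go sketch of the same in-cluster access pattern (an illustration, not the e2e framework's code; it assumes a pod whose service account can list pods, and the namespace and page size are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig reads the token and CA bundle that the kubelet mounts
	// at /var/run/secrets/kubernetes.io/serviceaccount and points the client
	// at https://kubernetes.default.svc.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Roughly what the final "kubectl get pods" call in the log above does.
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d pods in the default namespace\n", len(pods.Items))
}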
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Simple pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should handle in-cluster config [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:654[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":7,"skipped":41,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:11.190: INFO: Only supported for providers [vsphere] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 60 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl patch [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1483[0m should add annotations for pods in rc [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":7,"skipped":56,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:12.491: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 21 lines ... Jun 21 20:40:09.905: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0777 on node default medium Jun 21 20:40:10.497: INFO: Waiting up to 5m0s for pod "pod-38f8e3f2-c626-4269-b78e-5db845346b36" in namespace "emptydir-4221" to be "Succeeded or Failed" Jun 21 20:40:10.593: INFO: Pod "pod-38f8e3f2-c626-4269-b78e-5db845346b36": Phase="Pending", Reason="", readiness=false. Elapsed: 96.057585ms Jun 21 20:40:12.771: INFO: Pod "pod-38f8e3f2-c626-4269-b78e-5db845346b36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273727381s Jun 21 20:40:14.868: INFO: Pod "pod-38f8e3f2-c626-4269-b78e-5db845346b36": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.370699843s [1mSTEP[0m: Saw pod success Jun 21 20:40:14.868: INFO: Pod "pod-38f8e3f2-c626-4269-b78e-5db845346b36" satisfied condition "Succeeded or Failed" Jun 21 20:40:14.969: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-38f8e3f2-c626-4269-b78e-5db845346b36 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:15.212: INFO: Waiting for pod pod-38f8e3f2-c626-4269-b78e-5db845346b36 to disappear Jun 21 20:40:15.309: INFO: Pod pod-38f8e3f2-c626-4269-b78e-5db845346b36 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.602 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:15.511: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 82 lines ... Jun 21 20:40:01.638: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Setting timeout (1s) shorter than webhook latency (5s) [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [1mSTEP[0m: Request fails when timeout (1s) is shorter than slow webhook latency (5s) [1mSTEP[0m: Having no error when timeout is shorter than webhook latency and failure policy is ignore [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [1mSTEP[0m: Having no error when timeout is longer than webhook latency [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [1mSTEP[0m: Having no error when timeout is empty (defaulted to 10s in v1) [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:15.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "webhook-8797" for this suite. [1mSTEP[0m: Destroying namespace "webhook-8797-markers" for this suite. ... skipping 4 lines ... 
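The "should honor timeout" steps above register a webhook backend that sleeps for 5s while the webhook itself is given a 1s timeout, so the request is rejected unless the failure policy is Ignore (or the timeout is raised or left at the 10s default). A rough sketch of the admissionregistration/v1 fields involved, using the service and namespace named in the log; the webhook name and rule are illustrative, and a real registration also needs the CA bundle and a wrapping ValidatingWebhookConfiguration:

package example

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
)

// slowWebhook sketches a webhook with a 1s timeout against a backend that
// sleeps 5s; with failurePolicy Ignore the timeout does not reject the request.
func slowWebhook() admissionregistrationv1.ValidatingWebhook {
	timeout := int32(1)
	policy := admissionregistrationv1.Ignore
	sideEffects := admissionregistrationv1.SideEffectClassNone
	return admissionregistrationv1.ValidatingWebhook{
		Name:                    "slow.example.com", // illustrative name
		TimeoutSeconds:          &timeout,
		FailurePolicy:           &policy,
		SideEffects:             &sideEffects,
		AdmissionReviewVersions: []string{"v1"},
		ClientConfig: admissionregistrationv1.WebhookClientConfig{
			Service: &admissionregistrationv1.ServiceReference{
				Namespace: "webhook-8797",
				Name:      "e2e-test-webhook",
			},
		},
		Rules: []admissionregistrationv1.RuleWithOperations{{
			Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
			Rule: admissionregistrationv1.Rule{
				APIGroups:   []string{""},
				APIVersions: []string{"v1"},
				Resources:   []string{"configmaps"}, // illustrative rule
			},
		}},
	}
}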
[32m• [SLOW TEST:22.150 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should honor timeout [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:40:08.914: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 [1mSTEP[0m: Creating projection with secret that has name projected-secret-test-9dbe5bf4-225d-46ac-87c2-569021884345 [1mSTEP[0m: Creating a pod to test consume secrets Jun 21 20:40:10.011: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af46ef7a-88bc-4885-8787-19ec88ac38a2" in namespace "projected-3888" to be "Succeeded or Failed" Jun 21 20:40:10.112: INFO: Pod "pod-projected-secrets-af46ef7a-88bc-4885-8787-19ec88ac38a2": Phase="Pending", Reason="", readiness=false. Elapsed: 100.74109ms Jun 21 20:40:12.209: INFO: Pod "pod-projected-secrets-af46ef7a-88bc-4885-8787-19ec88ac38a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198136001s Jun 21 20:40:14.308: INFO: Pod "pod-projected-secrets-af46ef7a-88bc-4885-8787-19ec88ac38a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296489209s Jun 21 20:40:16.410: INFO: Pod "pod-projected-secrets-af46ef7a-88bc-4885-8787-19ec88ac38a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.398559384s [1mSTEP[0m: Saw pod success Jun 21 20:40:16.410: INFO: Pod "pod-projected-secrets-af46ef7a-88bc-4885-8787-19ec88ac38a2" satisfied condition "Succeeded or Failed" Jun 21 20:40:16.515: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-projected-secrets-af46ef7a-88bc-4885-8787-19ec88ac38a2 container projected-secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:16.726: INFO: Waiting for pod pod-projected-secrets-af46ef7a-88bc-4885-8787-19ec88ac38a2 to disappear Jun 21 20:40:16.823: INFO: Pod pod-projected-secrets-af46ef7a-88bc-4885-8787-19ec88ac38a2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 5 lines ... 
[32m• [SLOW TEST:8.204 seconds][0m [sig-storage] Projected secret [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":5,"skipped":18,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 70 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47[0m should be mountable [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":7,"skipped":42,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:18.011: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 86 lines ... [32m• [SLOW TEST:5.771 seconds][0m [sig-node] Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should support remote command execution over websockets [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":68,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":3,"skipped":21,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:40:10.150: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename webhook [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 6 lines ... 
[1mSTEP[0m: Wait for the deployment to be ready Jun 21 20:40:12.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 21, 20, 40, 11, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 21, 20, 40, 11, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 21, 20, 40, 11, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 21, 20, 40, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 21 20:40:14.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 21, 20, 40, 11, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 21, 20, 40, 11, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 21, 20, 40, 11, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 21, 20, 40, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} [1mSTEP[0m: Deploying the webhook service [1mSTEP[0m: Verifying the service has paired with the endpoint Jun 21 20:40:17.263: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API [1mSTEP[0m: create a namespace for the webhook [1mSTEP[0m: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:17.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "webhook-2727" for this suite. ... skipping 2 lines ... 
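The DeploymentStatus dumps above are the framework polling until the sample webhook backend reports Available=True instead of MinimumReplicasUnavailable. A small helper expressing that readiness check (a sketch, not the framework's own code):

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// deploymentAvailable reports whether the Deployment's Available condition is
// True, i.e. the state the polling above waits for once the webhook pod is ready.
func deploymentAvailable(d *appsv1.Deployment) bool {
	for _, cond := range d.Status.Conditions {
		if cond.Type == appsv1.DeploymentAvailable {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}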
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 [32m• [SLOW TEST:8.581 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should unconditionally reject operations on fail closed webhook [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:18.732: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 49 lines ... [32m• [SLOW TEST:22.822 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m works for CRD preserving unknown fields in an embedded object [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":3,"skipped":4,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes ... skipping 5 lines ... [It] should allow exec of files on the volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196 Jun 21 20:40:18.759: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics Jun 21 20:40:18.759: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod exec-volume-test-inlinevolume-jqtz [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 20:40:18.863: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-jqtz" in namespace "volume-295" to be "Succeeded or Failed" Jun 21 20:40:18.960: INFO: Pod "exec-volume-test-inlinevolume-jqtz": Phase="Pending", Reason="", readiness=false. Elapsed: 97.131872ms Jun 21 20:40:21.060: INFO: Pod "exec-volume-test-inlinevolume-jqtz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.196831738s [1mSTEP[0m: Saw pod success Jun 21 20:40:21.060: INFO: Pod "exec-volume-test-inlinevolume-jqtz" satisfied condition "Succeeded or Failed" Jun 21 20:40:21.164: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod exec-volume-test-inlinevolume-jqtz container exec-container-inlinevolume-jqtz: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:21.377: INFO: Waiting for pod exec-volume-test-inlinevolume-jqtz to disappear Jun 21 20:40:21.476: INFO: Pod exec-volume-test-inlinevolume-jqtz no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-inlinevolume-jqtz Jun 21 20:40:21.476: INFO: Deleting pod "exec-volume-test-inlinevolume-jqtz" in namespace "volume-295" [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:21.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-295" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":69,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:21.783: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 46 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:22.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "svcaccounts-613" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":4,"skipped":5,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:22.685: INFO: Only supported for providers [vsphere] (not aws) ... skipping 71 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 [1mSTEP[0m: Creating a pod to test hostPath r/w Jun 21 20:40:19.341: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1825" to be "Succeeded or Failed" Jun 21 20:40:19.438: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 97.54118ms Jun 21 20:40:21.541: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200705566s Jun 21 20:40:23.639: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.298699809s [1mSTEP[0m: Saw pod success Jun 21 20:40:23.639: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 21 20:40:23.742: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-host-path-test container test-container-2: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:23.953: INFO: Waiting for pod pod-host-path-test to disappear Jun 21 20:40:24.051: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.518 seconds][0m [sig-storage] HostPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support r/w [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":5,"skipped":27,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:24.260: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 46 lines ... [32m• [SLOW TEST:6.347 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m deployment reaping should cascade to its replica sets and pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:95[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":8,"skipped":50,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes ... skipping 17 lines ... Jun 21 20:39:42.680: INFO: PersistentVolumeClaim pvc-7q5sp found but phase is Pending instead of Bound. Jun 21 20:39:44.780: INFO: PersistentVolumeClaim pvc-7q5sp found and phase=Bound (6.393236271s) Jun 21 20:39:44.781: INFO: Waiting up to 3m0s for PersistentVolume aws-z2lkt to have phase Bound Jun 21 20:39:44.877: INFO: PersistentVolume aws-z2lkt found and phase=Bound (96.786019ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-gkwt [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 20:39:45.176: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-gkwt" in namespace "volume-4809" to be "Succeeded or Failed" Jun 21 20:39:45.274: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. Elapsed: 97.839706ms Jun 21 20:39:47.373: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196454554s Jun 21 20:39:49.477: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.300435647s Jun 21 20:39:51.576: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400007437s Jun 21 20:39:53.712: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535191434s Jun 21 20:39:55.832: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656164756s Jun 21 20:39:57.931: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.754905132s Jun 21 20:40:00.044: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.867315856s Jun 21 20:40:02.154: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.977218838s Jun 21 20:40:04.257: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Pending", Reason="", readiness=false. Elapsed: 19.080728226s Jun 21 20:40:06.356: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Running", Reason="", readiness=true. Elapsed: 21.17937141s Jun 21 20:40:08.454: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.278077993s [1mSTEP[0m: Saw pod success Jun 21 20:40:08.454: INFO: Pod "exec-volume-test-preprovisionedpv-gkwt" satisfied condition "Succeeded or Failed" Jun 21 20:40:08.552: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-gkwt container exec-container-preprovisionedpv-gkwt: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:08.764: INFO: Waiting for pod exec-volume-test-preprovisionedpv-gkwt to disappear Jun 21 20:40:08.862: INFO: Pod exec-volume-test-preprovisionedpv-gkwt no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-gkwt Jun 21 20:40:08.862: INFO: Deleting pod "exec-volume-test-preprovisionedpv-gkwt" in namespace "volume-4809" [1mSTEP[0m: Deleting pv and pvc Jun 21 20:40:08.959: INFO: Deleting PersistentVolumeClaim "pvc-7q5sp" Jun 21 20:40:09.077: INFO: Deleting PersistentVolume "aws-z2lkt" Jun 21 20:40:09.361: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0441a836efbd29908", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0441a836efbd29908 is currently attached to i-0a54d9ce3df6ebe23 status code: 400, request id: 0577ec97-8f4e-4942-b80e-5a04a6052ea0 Jun 21 20:40:14.867: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0441a836efbd29908", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0441a836efbd29908 is currently attached to i-0a54d9ce3df6ebe23 status code: 400, request id: 111b0455-f645-4089-912f-e69bb5087df2 Jun 21 20:40:20.376: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0441a836efbd29908", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0441a836efbd29908 is currently attached to i-0a54d9ce3df6ebe23 status code: 400, request id: 11cb10ed-b9de-4600-8a9d-38350e0b04c0 Jun 21 20:40:25.933: INFO: Successfully deleted PD "aws://eu-west-2a/vol-0441a836efbd29908". [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:25.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-4809" for this suite. ... skipping 6 lines ... 
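The teardown above cannot delete the EBS volume while it is still attached (VolumeInUse), so it simply retries every 5s until the detach completes and the delete succeeds. The same pattern as a generic sketch; the delete callback, attempt count, and interval are placeholders rather than the e2e framework's actual helper:

package example

import "time"

// retryDelete keeps calling del until it succeeds or the attempts run out,
// sleeping between tries; the same shape as the "Couldn't delete PD ...,
// sleeping 5s" loop above while the EBS volume is still attached.
func retryDelete(del func() error, attempts int, interval time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = del(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return err
}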
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":25,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:26.134: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 41 lines ... Jun 21 20:40:26.141: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:110 [1mSTEP[0m: Creating a pod to test downward api env vars Jun 21 20:40:26.731: INFO: Waiting up to 5m0s for pod "downward-api-774bf5b7-6c91-4fbf-8283-4ba2f4c97314" in namespace "downward-api-8413" to be "Succeeded or Failed" Jun 21 20:40:26.832: INFO: Pod "downward-api-774bf5b7-6c91-4fbf-8283-4ba2f4c97314": Phase="Pending", Reason="", readiness=false. Elapsed: 100.826987ms Jun 21 20:40:28.929: INFO: Pod "downward-api-774bf5b7-6c91-4fbf-8283-4ba2f4c97314": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198048335s Jun 21 20:40:31.027: INFO: Pod "downward-api-774bf5b7-6c91-4fbf-8283-4ba2f4c97314": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295779655s Jun 21 20:40:33.125: INFO: Pod "downward-api-774bf5b7-6c91-4fbf-8283-4ba2f4c97314": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.39390865s [1mSTEP[0m: Saw pod success Jun 21 20:40:33.125: INFO: Pod "downward-api-774bf5b7-6c91-4fbf-8283-4ba2f4c97314" satisfied condition "Succeeded or Failed" Jun 21 20:40:33.222: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod downward-api-774bf5b7-6c91-4fbf-8283-4ba2f4c97314 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:33.427: INFO: Waiting for pod downward-api-774bf5b7-6c91-4fbf-8283-4ba2f4c97314 to disappear Jun 21 20:40:33.524: INFO: Pod downward-api-774bf5b7-6c91-4fbf-8283-4ba2f4c97314 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
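The Downward API case above injects the node and pod IPs as environment variables and runs the pod with host networking, so both values should agree. A sketch of the relevant PodSpec fields (image, command, and variable names are illustrative):

package example

import corev1 "k8s.io/api/core/v1"

// downwardAPIPodSpec injects the node and pod IPs via the downward API; with
// HostNetwork set, both resolve to the node address, which is what the test
// above asserts.
func downwardAPIPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		HostNetwork:   true,
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "dapi-container",
			Image:   "busybox",
			Command: []string{"sh", "-c", "env | grep _IP"},
			Env: []corev1.EnvVar{
				{Name: "HOST_IP", ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
				}},
				{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"},
				}},
			},
		}},
	}
}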
[32m• [SLOW TEST:7.582 seconds][0m [sig-node] Downward API [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:110[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":5,"skipped":29,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 32 lines ... [32m• [SLOW TEST:17.014 seconds][0m [sig-apps] ReplicationController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should test the lifecycle of a ReplicationController [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":6,"skipped":20,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:34.138: INFO: Only supported for providers [openstack] (not aws) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 107 lines ... Jun 21 20:40:27.820: INFO: PersistentVolumeClaim pvc-8fmh6 found but phase is Pending instead of Bound. Jun 21 20:40:29.916: INFO: PersistentVolumeClaim pvc-8fmh6 found and phase=Bound (8.488237285s) Jun 21 20:40:29.916: INFO: Waiting up to 3m0s for PersistentVolume local-vh9lc to have phase Bound Jun 21 20:40:30.012: INFO: PersistentVolume local-vh9lc found and phase=Bound (95.809997ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-wqhm [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:40:30.307: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wqhm" in namespace "provisioning-8457" to be "Succeeded or Failed" Jun 21 20:40:30.408: INFO: Pod "pod-subpath-test-preprovisionedpv-wqhm": Phase="Pending", Reason="", readiness=false. Elapsed: 101.699184ms Jun 21 20:40:32.506: INFO: Pod "pod-subpath-test-preprovisionedpv-wqhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199376431s Jun 21 20:40:34.605: INFO: Pod "pod-subpath-test-preprovisionedpv-wqhm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.298061363s [1mSTEP[0m: Saw pod success Jun 21 20:40:34.605: INFO: Pod "pod-subpath-test-preprovisionedpv-wqhm" satisfied condition "Succeeded or Failed" Jun 21 20:40:34.700: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-wqhm container test-container-subpath-preprovisionedpv-wqhm: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:34.914: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wqhm to disappear Jun 21 20:40:35.010: INFO: Pod pod-subpath-test-preprovisionedpv-wqhm no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-wqhm Jun 21 20:40:35.010: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wqhm" in namespace "provisioning-8457" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":34,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:36.538: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 24 lines ... [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Jun 21 20:40:34.345: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-55315cbc-1c0f-4098-b4b2-b3f77ab8c086" in namespace "security-context-test-3333" to be "Succeeded or Failed" Jun 21 20:40:34.445: INFO: Pod "busybox-privileged-true-55315cbc-1c0f-4098-b4b2-b3f77ab8c086": Phase="Pending", Reason="", readiness=false. Elapsed: 99.763562ms Jun 21 20:40:36.544: INFO: Pod "busybox-privileged-true-55315cbc-1c0f-4098-b4b2-b3f77ab8c086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.198670424s Jun 21 20:40:36.544: INFO: Pod "busybox-privileged-true-55315cbc-1c0f-4098-b4b2-b3f77ab8c086" satisfied condition "Succeeded or Failed" Jun 21 20:40:36.645: INFO: Got logs for pod "busybox-privileged-true-55315cbc-1c0f-4098-b4b2-b3f77ab8c086": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:36.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-3333" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":6,"skipped":37,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 85 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume at the same time [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":51,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:39.230: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 89 lines ... [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:40:39.269: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename topology [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192 Jun 21 20:40:39.856: INFO: found topology map[topology.kubernetes.io/zone:eu-west-2a] Jun 21 20:40:39.856: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics Jun 21 20:40:39.856: INFO: Not enough topologies in cluster -- skipping [1mSTEP[0m: Deleting pvc [1mSTEP[0m: Deleting sc ... skipping 7 lines ... 
[sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: aws] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [It][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mNot enough topologies in cluster -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199 [90m------------------------------[0m ... skipping 41 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452[0m that expects a client request [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453[0m should support a client that connects, sends NO DATA, and disconnects [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":10,"skipped":72,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:40.225: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 37 lines ... Jun 21 20:40:11.535: INFO: PersistentVolumeClaim pvc-tg4d7 found but phase is Pending instead of Bound. Jun 21 20:40:13.637: INFO: PersistentVolumeClaim pvc-tg4d7 found and phase=Bound (2.199644397s) Jun 21 20:40:13.637: INFO: Waiting up to 3m0s for PersistentVolume local-4rjfq to have phase Bound Jun 21 20:40:13.748: INFO: PersistentVolume local-4rjfq found and phase=Bound (111.700713ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-pwn5 [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 21 20:40:14.044: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pwn5" in namespace "provisioning-1147" to be "Succeeded or Failed" Jun 21 20:40:14.143: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Pending", Reason="", readiness=false. Elapsed: 98.793589ms Jun 21 20:40:16.240: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195794427s Jun 21 20:40:18.338: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Running", Reason="", readiness=true. Elapsed: 4.293185429s Jun 21 20:40:20.437: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Running", Reason="", readiness=true. Elapsed: 6.392735252s Jun 21 20:40:22.535: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.490648019s Jun 21 20:40:24.635: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Running", Reason="", readiness=true. Elapsed: 10.590151655s ... skipping 2 lines ... Jun 21 20:40:30.951: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Running", Reason="", readiness=true. Elapsed: 16.906950475s Jun 21 20:40:33.049: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Running", Reason="", readiness=true. Elapsed: 19.004895498s Jun 21 20:40:35.147: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Running", Reason="", readiness=true. Elapsed: 21.102373713s Jun 21 20:40:37.258: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Running", Reason="", readiness=true. Elapsed: 23.21390999s Jun 21 20:40:39.356: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.311887286s [1mSTEP[0m: Saw pod success Jun 21 20:40:39.356: INFO: Pod "pod-subpath-test-preprovisionedpv-pwn5" satisfied condition "Succeeded or Failed" Jun 21 20:40:39.459: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-pwn5 container test-container-subpath-preprovisionedpv-pwn5: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:39.663: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pwn5 to disappear Jun 21 20:40:39.761: INFO: Pod pod-subpath-test-preprovisionedpv-pwn5 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-pwn5 Jun 21 20:40:39.761: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pwn5" in namespace "provisioning-1147" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":34,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 13 lines ... 
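The "file as subpath" case above mounts a single path from the pre-provisioned volume into the container via subPath rather than mounting the whole volume. The mount shape it relies on, sketched with illustrative names:

package example

import corev1 "k8s.io/api/core/v1"

// subPathMount exposes one file out of a larger volume instead of the whole
// volume, the pattern the "file as subpath" case above exercises.
func subPathMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test-volume/file.txt",
		SubPath:   "file.txt", // path inside the volume, relative to its root
	}
}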
Jun 21 20:40:39.861: INFO: Creating a PV followed by a PVC Jun 21 20:40:40.064: INFO: Waiting for PV local-pvz5wjs to bind to PVC pvc-hnh9l Jun 21 20:40:40.064: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hnh9l] to have phase Bound Jun 21 20:40:40.164: INFO: PersistentVolumeClaim pvc-hnh9l found and phase=Bound (100.520272ms) Jun 21 20:40:40.164: INFO: Waiting up to 3m0s for PersistentVolume local-pvz5wjs to have phase Bound Jun 21 20:40:40.261: INFO: PersistentVolume local-pvz5wjs found and phase=Bound (97.002034ms) [It] should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 [1mSTEP[0m: local-volume-type: dir Jun 21 20:40:40.577: INFO: Waiting up to 5m0s for pod "pod-6c15938c-9e1d-4ac1-96db-be8ed177fd35" in namespace "persistent-local-volumes-test-9250" to be "Unschedulable" Jun 21 20:40:40.679: INFO: Pod "pod-6c15938c-9e1d-4ac1-96db-be8ed177fd35": Phase="Pending", Reason="", readiness=false. Elapsed: 101.786865ms Jun 21 20:40:40.679: INFO: Pod "pod-6c15938c-9e1d-4ac1-96db-be8ed177fd35" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity ... skipping 14 lines ... [32m• [SLOW TEST:7.741 seconds][0m [sig-storage] PersistentVolumes-local [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m Pod with node different from PV's NodeAffinity [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347[0m should fail scheduling due to different NodeAffinity [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":7,"skipped":30,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:41.896: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 58 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0} [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:38:35.160: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename pv [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 15 lines ... 
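The NodeAffinity scheduling case above binds a local PV that is pinned to one node and then constrains the pod to a different node, so the pod is expected to stay Unschedulable. The PV-side pin looks roughly like this (the hostname is illustrative):

package example

import corev1 "k8s.io/api/core/v1"

// localPVNodeAffinity pins a PersistentVolume to a single node. A pod forced
// onto a different node can still bind the claim, but it never schedules,
// which is the "Unschedulable" condition asserted above.
func localPVNodeAffinity(hostname string) *corev1.VolumeNodeAffinity {
	return &corev1.VolumeNodeAffinity{
		Required: &corev1.NodeSelector{
			NodeSelectorTerms: []corev1.NodeSelectorTerm{{
				MatchExpressions: []corev1.NodeSelectorRequirement{{
					Key:      "kubernetes.io/hostname",
					Operator: corev1.NodeSelectorOpIn,
					Values:   []string{hostname},
				}},
			}},
		},
	}
}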
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m When pod refers to non-existent ephemeral storage [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53[0m should allow deletion of pod with invalid volume : configmap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":4,"skipped":8,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:42.340: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 113 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:43.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "certificates-1116" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":11,"skipped":74,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:43.988: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping ... skipping 135 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: cinder] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for providers [openstack] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092 [90m------------------------------[0m ... skipping 27 lines ... [1mSTEP[0m: retrieving the pod [1mSTEP[0m: looking for the results for each expected name from probers Jun 21 20:40:25.658: INFO: File wheezy_udp@dns-test-service-3.dns-4971.svc.cluster.local from pod dns-4971/dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 21 20:40:25.778: INFO: File jessie_udp@dns-test-service-3.dns-4971.svc.cluster.local from pod dns-4971/dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 21 20:40:25.778: INFO: Lookups using dns-4971/dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc failed for: [wheezy_udp@dns-test-service-3.dns-4971.svc.cluster.local jessie_udp@dns-test-service-3.dns-4971.svc.cluster.local] Jun 21 20:40:30.879: INFO: File wheezy_udp@dns-test-service-3.dns-4971.svc.cluster.local from pod dns-4971/dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 21 20:40:30.980: INFO: File jessie_udp@dns-test-service-3.dns-4971.svc.cluster.local from pod dns-4971/dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 21 20:40:30.980: INFO: Lookups using dns-4971/dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc failed for: [wheezy_udp@dns-test-service-3.dns-4971.svc.cluster.local jessie_udp@dns-test-service-3.dns-4971.svc.cluster.local] Jun 21 20:40:35.884: INFO: File wheezy_udp@dns-test-service-3.dns-4971.svc.cluster.local from pod dns-4971/dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 21 20:40:35.995: INFO: File jessie_udp@dns-test-service-3.dns-4971.svc.cluster.local from pod dns-4971/dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 21 20:40:35.995: INFO: Lookups using dns-4971/dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc failed for: [wheezy_udp@dns-test-service-3.dns-4971.svc.cluster.local jessie_udp@dns-test-service-3.dns-4971.svc.cluster.local] Jun 21 20:40:40.974: INFO: DNS probes using dns-test-f3ec4455-725b-4db3-9e37-3fbdab7d78fc succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: changing the service to type=ClusterIP [1mSTEP[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4971.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4971.svc.cluster.local; sleep 1; done ... skipping 17 lines ... [32m• [SLOW TEST:44.903 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should provide DNS for ExternalName services [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":3,"skipped":5,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:46.484: INFO: Only supported for providers [vsphere] (not aws) ... skipping 80 lines ... 
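The dig loop quoted above is how the DNS test polls for the ExternalName change to propagate; the same probe can be replayed by hand from any pod that has dig available. A minimal sketch using the service name from this run (specific to this invocation):

  for i in $(seq 1 30); do
    dig +short dns-test-service-3.dns-4971.svc.cluster.local A
    sleep 1
  done
  # at the stage shown above (before the service is switched to type=ClusterIP), the +short output
  # is expected to contain bar.example.com. rather than foo.example.com. once the record propagates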
[1mSTEP[0m: Deleting pod verify-service-up-host-exec-pod in namespace services-2139 [1mSTEP[0m: Deleting pod verify-service-up-exec-pod-pvcwv in namespace services-2139 [1mSTEP[0m: verifying service-headless is not up Jun 21 20:40:17.517: INFO: Creating new host exec pod Jun 21 20:40:17.714: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jun 21 20:40:19.812: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Jun 21 20:40:19.813: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2139 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.58.110:80 && echo service-down-failed' Jun 21 20:40:23.127: INFO: rc: 28 Jun 21 20:40:23.127: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.58.110:80 && echo service-down-failed" in pod services-2139/verify-service-down-host-exec-pod: error running /logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2139 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.58.110:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://100.69.58.110:80 command terminated with exit code 28 error: exit status 28 Output: [1mSTEP[0m: Deleting pod verify-service-down-host-exec-pod in namespace services-2139 [1mSTEP[0m: adding service.kubernetes.io/headless label [1mSTEP[0m: verifying service is not up Jun 21 20:40:23.436: INFO: Creating new host exec pod Jun 21 20:40:23.635: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jun 21 20:40:25.737: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Jun 21 20:40:25.737: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2139 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.120.18:80 && echo service-down-failed' Jun 21 20:40:28.971: INFO: rc: 28 Jun 21 20:40:28.971: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.120.18:80 && echo service-down-failed" in pod services-2139/verify-service-down-host-exec-pod: error running /logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2139 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.120.18:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://100.65.120.18:80 command terminated with exit code 28 error: exit status 28 Output: [1mSTEP[0m: Deleting pod verify-service-down-host-exec-pod in namespace services-2139 [1mSTEP[0m: removing service.kubernetes.io/headless annotation [1mSTEP[0m: verifying service is up Jun 21 20:40:29.280: INFO: Creating new host exec pod ... skipping 14 lines ... 
[1mSTEP[0m: Deleting pod verify-service-up-host-exec-pod in namespace services-2139 [1mSTEP[0m: Deleting pod verify-service-up-exec-pod-dj2gd in namespace services-2139 [1mSTEP[0m: verifying service-headless is still not up Jun 21 20:40:41.767: INFO: Creating new host exec pod Jun 21 20:40:41.965: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jun 21 20:40:44.064: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Jun 21 20:40:44.064: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2139 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.58.110:80 && echo service-down-failed' Jun 21 20:40:47.399: INFO: rc: 28 Jun 21 20:40:47.399: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.58.110:80 && echo service-down-failed" in pod services-2139/verify-service-down-host-exec-pod: error running /logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2139 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.58.110:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://100.69.58.110:80 command terminated with exit code 28 error: exit status 28 Output: [1mSTEP[0m: Deleting pod verify-service-down-host-exec-pod in namespace services-2139 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:47.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready ... skipping 5 lines ... [32m• [SLOW TEST:56.250 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should implement service.kubernetes.io/headless [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1940[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":4,"skipped":53,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:47.718: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 123 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:51.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "custom-resource-definition-8175" for this suite. 
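The exit-code-28 results above are the intended outcome for the "verifying service-headless is still not up" step: the test treats a curl connection timeout against the ClusterIP as evidence that the service does not answer. The same check can be replayed from a host-exec pod like the one the test creates; a minimal sketch with the ClusterIP from this run:

  curl -g -s --connect-timeout 2 http://100.69.58.110:80 && echo service-down-failed
  # curl exiting with code 28 (connect timeout) and no service-down-failed line means the ClusterIP
  # is unreachable, which is what this step of the test expects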
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":5,"skipped":58,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath ... skipping 2 lines ... Jun 21 20:40:44.014: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support existing directories when readOnly specified in the volumeSource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395 Jun 21 20:40:44.506: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 21 20:40:44.710: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1007" in namespace "provisioning-1007" to be "Succeeded or Failed" Jun 21 20:40:44.808: INFO: Pod "hostpath-symlink-prep-provisioning-1007": Phase="Pending", Reason="", readiness=false. Elapsed: 98.16136ms Jun 21 20:40:46.907: INFO: Pod "hostpath-symlink-prep-provisioning-1007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.19688375s [1mSTEP[0m: Saw pod success Jun 21 20:40:46.907: INFO: Pod "hostpath-symlink-prep-provisioning-1007" satisfied condition "Succeeded or Failed" Jun 21 20:40:46.907: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1007" in namespace "provisioning-1007" Jun 21 20:40:47.016: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1007" to be fully deleted Jun 21 20:40:47.118: INFO: Creating resource for inline volume Jun 21 20:40:47.118: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source [1mSTEP[0m: Deleting pod Jun 21 20:40:47.119: INFO: Deleting pod "pod-subpath-test-inlinevolume-cnqb" in namespace "provisioning-1007" Jun 21 20:40:47.320: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1007" in namespace "provisioning-1007" to be "Succeeded or Failed" Jun 21 20:40:47.422: INFO: Pod "hostpath-symlink-prep-provisioning-1007": Phase="Pending", Reason="", readiness=false. Elapsed: 101.881941ms Jun 21 20:40:49.531: INFO: Pod "hostpath-symlink-prep-provisioning-1007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211132257s Jun 21 20:40:51.637: INFO: Pod "hostpath-symlink-prep-provisioning-1007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.316334623s [1mSTEP[0m: Saw pod success Jun 21 20:40:51.637: INFO: Pod "hostpath-symlink-prep-provisioning-1007" satisfied condition "Succeeded or Failed" Jun 21 20:40:51.637: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1007" in namespace "provisioning-1007" Jun 21 20:40:51.745: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1007" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:40:51.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "provisioning-1007" for this suite. ... skipping 65 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 21 20:40:52.873: INFO: Waiting up to 5m0s for pod "metadata-volume-024cbc35-1922-40d0-953b-f761a8089b2c" in namespace "downward-api-5552" to be "Succeeded or Failed" Jun 21 20:40:52.970: INFO: Pod "metadata-volume-024cbc35-1922-40d0-953b-f761a8089b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 97.390665ms Jun 21 20:40:55.071: INFO: Pod "metadata-volume-024cbc35-1922-40d0-953b-f761a8089b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19776412s Jun 21 20:40:57.179: INFO: Pod "metadata-volume-024cbc35-1922-40d0-953b-f761a8089b2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.306244637s [1mSTEP[0m: Saw pod success Jun 21 20:40:57.179: INFO: Pod "metadata-volume-024cbc35-1922-40d0-953b-f761a8089b2c" satisfied condition "Succeeded or Failed" Jun 21 20:40:57.277: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod metadata-volume-024cbc35-1922-40d0-953b-f761a8089b2c container client-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:40:57.593: INFO: Waiting for pod metadata-volume-024cbc35-1922-40d0-953b-f761a8089b2c to disappear Jun 21 20:40:57.710: INFO: Pod metadata-volume-024cbc35-1922-40d0-953b-f761a8089b2c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:5.850 seconds][0m [sig-storage] Downward API volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":93,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:40:57.921: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 41 lines ... [32m• [SLOW TEST:6.465 seconds][0m [sig-node] Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be submitted and removed [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... [32m• [SLOW TEST:55.066 seconds][0m [sig-apps] CronJob [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should be able to schedule after more than 100 missed schedule [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":3,"skipped":27,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:02.285: INFO: Only supported for providers [gce gke] (not aws) ... skipping 165 lines ... 
Jun 21 20:40:50.387: INFO: Received response from host: affinity-clusterip-timeout-klk9w Jun 21 20:40:50.387: INFO: Received response from host: affinity-clusterip-timeout-klk9w Jun 21 20:40:50.387: INFO: Received response from host: affinity-clusterip-timeout-klk9w Jun 21 20:40:50.387: INFO: Received response from host: affinity-clusterip-timeout-pgk7k Jun 21 20:40:50.387: INFO: Received response from host: affinity-clusterip-timeout-2zrls Jun 21 20:40:50.387: INFO: [affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-2zrls affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-klk9w 
affinity-clusterip-timeout-klk9w affinity-clusterip-timeout-pgk7k affinity-clusterip-timeout-2zrls] Jun 21 20:40:50.387: FAIL: Affinity should hold but didn't. Full Stack Trace k8s.io/kubernetes/test/e2e/network.checkAffinity({0x78e7e70, 0xc001cfd500}, 0x0, {0xc0046ad510, 0x6eec963}, 0x10, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:209 +0x1b7 k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0010a4160, {0x78e7e70, 0xc001cfd500}, 0xc00461c000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2880 +0x805 ... skipping 41 lines ... Jun 21 20:41:00.071: INFO: At 2022-06-21 20:38:36 +0000 UTC - event for execpod-affinitysckcr: {kubelet ip-172-20-0-246.eu-west-2.compute.internal} Started: Started container agnhost-container Jun 21 20:41:00.071: INFO: At 2022-06-21 20:38:36 +0000 UTC - event for execpod-affinitysckcr: {kubelet ip-172-20-0-246.eu-west-2.compute.internal} Created: Created container agnhost-container Jun 21 20:41:00.071: INFO: At 2022-06-21 20:40:50 +0000 UTC - event for execpod-affinitysckcr: {kubelet ip-172-20-0-246.eu-west-2.compute.internal} Killing: Stopping container agnhost-container Jun 21 20:41:00.071: INFO: At 2022-06-21 20:40:51 +0000 UTC - event for affinity-clusterip-timeout-2zrls: {kubelet ip-172-20-0-246.eu-west-2.compute.internal} Killing: Stopping container affinity-clusterip-timeout Jun 21 20:41:00.071: INFO: At 2022-06-21 20:40:51 +0000 UTC - event for affinity-clusterip-timeout-klk9w: {kubelet ip-172-20-0-54.eu-west-2.compute.internal} Killing: Stopping container affinity-clusterip-timeout Jun 21 20:41:00.071: INFO: At 2022-06-21 20:40:51 +0000 UTC - event for affinity-clusterip-timeout-pgk7k: {kubelet ip-172-20-0-148.eu-west-2.compute.internal} Killing: Stopping container affinity-clusterip-timeout Jun 21 20:41:00.072: INFO: At 2022-06-21 20:40:53 +0000 UTC - event for affinity-clusterip-timeout-klk9w: {kubelet ip-172-20-0-54.eu-west-2.compute.internal} FailedKillPod: error killing pod: failed to "KillContainer" for "affinity-clusterip-timeout" with KillContainerError: "rpc error: code = NotFound desc = an error occurred when try to find container \"b31527a56c870250e3e0d6a207a6d56692ce7ea8b46aeedce8646290ce074686\": not found" Jun 21 20:41:00.072: INFO: At 2022-06-21 20:40:54 +0000 UTC - event for affinity-clusterip-timeout-2zrls: {kubelet ip-172-20-0-246.eu-west-2.compute.internal} FailedKillPod: error killing pod: failed to "KillContainer" for "affinity-clusterip-timeout" with KillContainerError: "rpc error: code = NotFound desc = an error occurred when try to find container \"8f6ac6c707541ff15364a4c55f54116a93c412c2b1062e713273a90f71f48685\": not found" Jun 21 20:41:00.174: INFO: POD NODE PHASE GRACE CONDITIONS Jun 21 20:41:00.174: INFO: Jun 21 20:41:00.274: INFO: Logging node info for node ip-172-20-0-148.eu-west-2.compute.internal Jun 21 20:41:00.371: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-0-148.eu-west-2.compute.internal 137c1a40-ae58-475b-8361-fb0a3e092630 15505 0 2022-06-21 20:33:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-0-148.eu-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: 
node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:ip-172-20-0-148.eu-west-2.compute.internal topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-8103":"ip-172-20-0-148.eu-west-2.compute.internal","csi-mock-csi-mock-volumes-2001":"csi-mock-csi-mock-volumes-2001","ebs.csi.aws.com":"i-0a740318a9456a046"} io.cilium.network.ipv4-cilium-host:100.96.8.243 io.cilium.network.ipv4-health-ip:100.96.8.180 io.cilium.network.ipv4-pod-cidr:100.96.8.0/24 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{Go-http-client Update v1 2022-06-21 20:33:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kops-controller Update v1 2022-06-21 20:33:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2022-06-21 20:33:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.8.0/24\"":{}}}} } {cilium-agent Update v1 2022-06-21 20:33:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:io.cilium.network.ipv4-cilium-host":{},"f:io.cilium.network.ipv4-health-ip":{},"f:io.cilium.network.ipv4-pod-cidr":{}}}} } {cilium-agent Update v1 2022-06-21 20:33:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-06-21 20:40:48 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {Go-http-client Update v1 2022-06-21 20:40:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.8.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-0a740318a9456a046,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.8.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133167038464 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4064743424 0} {<nil>} 3969476Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119850334420 0} {<nil>} 
119850334420 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959885824 0} {<nil>} 3867076Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-21 20:33:21 +0000 UTC,LastTransitionTime:2022-06-21 20:33:21 +0000 UTC,Reason:CiliumIsUp,Message:Cilium is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-21 20:40:50 +0000 UTC,LastTransitionTime:2022-06-21 20:33:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-21 20:40:50 +0000 UTC,LastTransitionTime:2022-06-21 20:33:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-21 20:40:50 +0000 UTC,LastTransitionTime:2022-06-21 20:33:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-21 20:40:50 +0000 UTC,LastTransitionTime:2022-06-21 20:33:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.148,},NodeAddress{Type:ExternalIP,Address:35.176.67.196,},NodeAddress{Type:Hostname,Address:ip-172-20-0-148.eu-west-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-0-148.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-35-176-67-196.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c257865a5b2a4bef8ab9f133ed7c7,SystemUUID:ec2c2578-65a5-b2a4-bef8-ab9f133ed7c7,BootID:91a56b7a-ccd3-47cd-acff-dcabbb2efd60,KernelVersion:5.4.0-1029-aws,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.23.1,KubeProxyVersion:v1.23.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:0612218e28288db360c63677c09fafa2d17edda4f13867bcabf87056046b33bb quay.io/cilium/cilium:v1.10.5],SizeBytes:149643860,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:ddd1b2e650ce5a10b3f5e9ae706cc384fc7e1a15940e07bba75f27369bc6a1ac k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.5.0],SizeBytes:114728287,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43 k8s.gcr.io/e2e-test-images/agnhost:2.33],SizeBytes:49628485,},ContainerImage{Names:[k8s.gcr.io/dns/k8s-dns-node-cache@sha256:94f4b59b3b85a38ada50c0772b67a23877a19b30b64e0313e6e81ebcf5cd7e91 k8s.gcr.io/dns/k8s-dns-node-cache:1.21.3],SizeBytes:42475608,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:e40f3a28721588affcf187f3f246d1e078157dabe274003eaa2957a83f7170c8 
k8s.gcr.io/kube-proxy:v1.23.1],SizeBytes:39272869,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[quay.io/cilium/hubble-relay@sha256:5d83c9d674e01c449f7fa65f176f2bde6568498acb726f5fe25cc12149c216c5 quay.io/cilium/hubble-relay:v1.10.5],SizeBytes:10737331,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-030356ac36334b8ee],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-030356ac36334b8ee,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-8103^73a2d6b2-f1a2-11ec-a0b7-86c5da0d07f8,DevicePath:,},},Config:nil,},} Jun 21 20:41:00.372: INFO: ... skipping 310 lines ... [32m• [SLOW TEST:81.385 seconds][0m [sig-apps] CronJob [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should delete successful finished jobs with limit of one successful job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:278[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":6,"skipped":41,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:05.979: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 38 lines ... 
[32m• [SLOW TEST:29.480 seconds][0m [sig-api-machinery] ResourceQuota [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a configMap. [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:06.029: INFO: Only supported for providers [vsphere] (not aws) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 55 lines ... [36mDriver supports dynamic provisioning, skipping InlineVolume pattern[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:249 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":0,"skipped":8,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:05.965: INFO: >>> kubeConfig: /root/.kube/config ... skipping 30 lines ... Jun 21 20:41:02.291: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename containers [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test override command Jun 21 20:41:02.877: INFO: Waiting up to 5m0s for pod "client-containers-0147a9f5-6c2c-4dca-8452-0707848b30a4" in namespace "containers-9443" to be "Succeeded or Failed" Jun 21 20:41:02.973: INFO: Pod "client-containers-0147a9f5-6c2c-4dca-8452-0707848b30a4": Phase="Pending", Reason="", readiness=false. Elapsed: 96.101608ms Jun 21 20:41:05.069: INFO: Pod "client-containers-0147a9f5-6c2c-4dca-8452-0707848b30a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192164683s Jun 21 20:41:07.166: INFO: Pod "client-containers-0147a9f5-6c2c-4dca-8452-0707848b30a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.289109432s [1mSTEP[0m: Saw pod success Jun 21 20:41:07.166: INFO: Pod "client-containers-0147a9f5-6c2c-4dca-8452-0707848b30a4" satisfied condition "Succeeded or Failed" Jun 21 20:41:07.262: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod client-containers-0147a9f5-6c2c-4dca-8452-0707848b30a4 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:07.535: INFO: Waiting for pod client-containers-0147a9f5-6c2c-4dca-8452-0707848b30a4 to disappear Jun 21 20:41:07.636: INFO: Pod client-containers-0147a9f5-6c2c-4dca-8452-0707848b30a4 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.569 seconds][0m [sig-node] Docker Containers [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:07.881: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 5 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: emptydir] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver emptydir doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 9 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 [1mSTEP[0m: Setting up data [It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating pod pod-subpath-test-secret-lbxs [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 21 20:40:37.812: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lbxs" in namespace "subpath-8420" to be "Succeeded or Failed" Jun 21 20:40:37.914: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Pending", Reason="", readiness=false. 
Elapsed: 102.001716ms Jun 21 20:40:40.018: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205924668s Jun 21 20:40:42.118: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Running", Reason="", readiness=true. Elapsed: 4.305924024s Jun 21 20:40:44.219: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Running", Reason="", readiness=true. Elapsed: 6.406252965s Jun 21 20:40:46.317: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Running", Reason="", readiness=true. Elapsed: 8.504099635s Jun 21 20:40:48.414: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Running", Reason="", readiness=true. Elapsed: 10.601264008s ... skipping 4 lines ... Jun 21 20:40:58.929: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Running", Reason="", readiness=true. Elapsed: 21.11694915s Jun 21 20:41:01.043: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Running", Reason="", readiness=true. Elapsed: 23.230675956s Jun 21 20:41:03.146: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Running", Reason="", readiness=true. Elapsed: 25.334038745s Jun 21 20:41:05.248: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Running", Reason="", readiness=true. Elapsed: 27.435259267s Jun 21 20:41:07.347: INFO: Pod "pod-subpath-test-secret-lbxs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.534142006s [1mSTEP[0m: Saw pod success Jun 21 20:41:07.347: INFO: Pod "pod-subpath-test-secret-lbxs" satisfied condition "Succeeded or Failed" Jun 21 20:41:07.487: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-secret-lbxs container test-container-subpath-secret-lbxs: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:07.727: INFO: Waiting for pod pod-subpath-test-secret-lbxs to disappear Jun 21 20:41:07.830: INFO: Pod pod-subpath-test-secret-lbxs no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-secret-lbxs Jun 21 20:41:07.830: INFO: Deleting pod "pod-subpath-test-secret-lbxs" in namespace "subpath-8420" ... skipping 8 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m Atomic writer volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34[0m should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":7,"skipped":42,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:08.137: INFO: Only supported for providers [gce gke] (not aws) ... skipping 193 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:41:09.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "discovery-1369" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":7,"skipped":45,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:09.483: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 38 lines ... [32m• [SLOW TEST:58.293 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m updates the published spec when one version gets renamed [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":8,"skipped":46,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:09.496: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 68 lines ... [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:09.521: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename topology [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192 Jun 21 20:41:10.112: INFO: found topology map[topology.kubernetes.io/zone:eu-west-2a] Jun 21 20:41:10.112: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics Jun 21 20:41:10.112: INFO: Not enough topologies in cluster -- skipping [1mSTEP[0m: Deleting pvc [1mSTEP[0m: Deleting sc ... skipping 7 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: aws] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [It][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mNot enough topologies in cluster -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199 [90m------------------------------[0m ... skipping 24 lines ... 
[32m• [SLOW TEST:22.140 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:11.707: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 53 lines ... Jun 21 20:41:07.888: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward api env vars Jun 21 20:41:08.474: INFO: Waiting up to 5m0s for pod "downward-api-ec27417e-2cff-4b8a-a425-cb2865f27f16" in namespace "downward-api-3731" to be "Succeeded or Failed" Jun 21 20:41:08.573: INFO: Pod "downward-api-ec27417e-2cff-4b8a-a425-cb2865f27f16": Phase="Pending", Reason="", readiness=false. Elapsed: 98.763902ms Jun 21 20:41:10.673: INFO: Pod "downward-api-ec27417e-2cff-4b8a-a425-cb2865f27f16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198785418s Jun 21 20:41:12.778: INFO: Pod "downward-api-ec27417e-2cff-4b8a-a425-cb2865f27f16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.303074541s [1mSTEP[0m: Saw pod success Jun 21 20:41:12.778: INFO: Pod "downward-api-ec27417e-2cff-4b8a-a425-cb2865f27f16" satisfied condition "Succeeded or Failed" Jun 21 20:41:12.891: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod downward-api-ec27417e-2cff-4b8a-a425-cb2865f27f16 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:13.097: INFO: Waiting for pod downward-api-ec27417e-2cff-4b8a-a425-cb2865f27f16 to disappear Jun 21 20:41:13.199: INFO: Pod downward-api-ec27417e-2cff-4b8a-a425-cb2865f27f16 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:5.507 seconds][0m [sig-node] Downward API [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should provide pod UID as env vars [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":53,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 98 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume one after the other [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":16,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:15.193: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 43 lines ... Jun 21 20:41:08.170: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0666 on tmpfs Jun 21 20:41:08.763: INFO: Waiting up to 5m0s for pod "pod-5fa2e17f-ec77-443b-b15e-10f1aa570794" in namespace "emptydir-5257" to be "Succeeded or Failed" Jun 21 20:41:08.861: INFO: Pod "pod-5fa2e17f-ec77-443b-b15e-10f1aa570794": Phase="Pending", Reason="", readiness=false. Elapsed: 98.027925ms Jun 21 20:41:10.958: INFO: Pod "pod-5fa2e17f-ec77-443b-b15e-10f1aa570794": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195553781s Jun 21 20:41:13.063: INFO: Pod "pod-5fa2e17f-ec77-443b-b15e-10f1aa570794": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300693123s Jun 21 20:41:15.171: INFO: Pod "pod-5fa2e17f-ec77-443b-b15e-10f1aa570794": Phase="Pending", Reason="", readiness=false. Elapsed: 6.407993367s Jun 21 20:41:17.276: INFO: Pod "pod-5fa2e17f-ec77-443b-b15e-10f1aa570794": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.513333679s [1mSTEP[0m: Saw pod success Jun 21 20:41:17.276: INFO: Pod "pod-5fa2e17f-ec77-443b-b15e-10f1aa570794" satisfied condition "Succeeded or Failed" Jun 21 20:41:17.380: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-5fa2e17f-ec77-443b-b15e-10f1aa570794 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:17.595: INFO: Waiting for pod pod-5fa2e17f-ec77-443b-b15e-10f1aa570794 to disappear Jun 21 20:41:17.698: INFO: Pod pod-5fa2e17f-ec77-443b-b15e-10f1aa570794 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:9.732 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":68,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:17.906: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 105 lines ... [32m• [SLOW TEST:38.103 seconds][0m [sig-storage] PVC Protection [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m Verify "immediate" deletion of a PVC that is not in active use by a pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":5,"skipped":35,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-autoscaling] DNS horizontal autoscaling /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 47 lines ... Jun 21 20:41:11.359: INFO: PersistentVolumeClaim pvc-2dc9k found but phase is Pending instead of Bound. Jun 21 20:41:13.457: INFO: PersistentVolumeClaim pvc-2dc9k found and phase=Bound (2.194380196s) Jun 21 20:41:13.457: INFO: Waiting up to 3m0s for PersistentVolume local-t8l6k to have phase Bound Jun 21 20:41:13.554: INFO: PersistentVolume local-t8l6k found and phase=Bound (96.864374ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-lzk5 [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:41:13.849: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lzk5" in namespace "provisioning-422" to be "Succeeded or Failed" Jun 21 20:41:13.947: INFO: Pod "pod-subpath-test-preprovisionedpv-lzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 97.50648ms Jun 21 20:41:16.045: INFO: Pod "pod-subpath-test-preprovisionedpv-lzk5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.19585557s Jun 21 20:41:18.143: INFO: Pod "pod-subpath-test-preprovisionedpv-lzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293908331s Jun 21 20:41:20.241: INFO: Pod "pod-subpath-test-preprovisionedpv-lzk5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.392280146s [1mSTEP[0m: Saw pod success Jun 21 20:41:20.241: INFO: Pod "pod-subpath-test-preprovisionedpv-lzk5" satisfied condition "Succeeded or Failed" Jun 21 20:41:20.343: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-lzk5 container test-container-subpath-preprovisionedpv-lzk5: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:20.572: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lzk5 to disappear Jun 21 20:41:20.669: INFO: Pod pod-subpath-test-preprovisionedpv-lzk5 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-lzk5 Jun 21 20:41:20.669: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lzk5" in namespace "provisioning-422" ... skipping 30 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:23.445: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 14 lines ... 
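[editor's note] The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" / "Phase=Pending ... Elapsed" lines above come from the e2e framework polling a pod's status until it reaches a terminal phase. Below is a minimal client-go sketch of that polling loop; the kubeconfig path, namespace, pod name, and 2-second interval are illustrative assumptions, not the framework's exact values.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll the pod until it reaches a terminal phase, mirroring the
	// "Waiting up to 5m0s for pod ... to be Succeeded or Failed" log lines.
	start := time.Now()
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("emptydir-5257").Get(context.TODO(), "pod-example", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", pod.Name, pod.Status.Phase, time.Since(start))
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}
```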
[36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:14.663: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir volume type on tmpfs Jun 21 20:41:15.288: INFO: Waiting up to 5m0s for pod "pod-4bc3969f-9513-420e-bfcd-de6a95e92221" in namespace "emptydir-6606" to be "Succeeded or Failed" Jun 21 20:41:15.385: INFO: Pod "pod-4bc3969f-9513-420e-bfcd-de6a95e92221": Phase="Pending", Reason="", readiness=false. Elapsed: 97.18238ms Jun 21 20:41:17.488: INFO: Pod "pod-4bc3969f-9513-420e-bfcd-de6a95e92221": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200368261s Jun 21 20:41:19.585: INFO: Pod "pod-4bc3969f-9513-420e-bfcd-de6a95e92221": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297780508s Jun 21 20:41:21.683: INFO: Pod "pod-4bc3969f-9513-420e-bfcd-de6a95e92221": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394949918s Jun 21 20:41:23.780: INFO: Pod "pod-4bc3969f-9513-420e-bfcd-de6a95e92221": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.49273896s [1mSTEP[0m: Saw pod success Jun 21 20:41:23.780: INFO: Pod "pod-4bc3969f-9513-420e-bfcd-de6a95e92221" satisfied condition "Succeeded or Failed" Jun 21 20:41:23.877: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-4bc3969f-9513-420e-bfcd-de6a95e92221 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:24.084: INFO: Waiting for pod pod-4bc3969f-9513-420e-bfcd-de6a95e92221 to disappear Jun 21 20:41:24.180: INFO: Pod pod-4bc3969f-9513-420e-bfcd-de6a95e92221 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:9.715 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:24.383: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 74 lines ... [32m• [SLOW TEST:12.503 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should support configurable pod DNS nameservers [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:24.965: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 50 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:41:24.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "configmap-9395" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":2,"skipped":17,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:25.142: INFO: Only supported for providers [vsphere] (not aws) ... skipping 24 lines ... 
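[editor's note] The emptyDir "volume on tmpfs should have the correct mode" case summarized above amounts to running a pod whose emptyDir volume uses the Memory medium (tmpfs) and reading the mount's permissions from inside the container. A rough sketch of such a pod object follows, assuming placeholder names, image, and command rather than the suite's exact ones.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryEmptyDirPod returns a pod that mounts a tmpfs-backed emptyDir volume
// and prints the mount's permission bits so a test could assert on the mode.
func memoryEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-mode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.36",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}

func main() { _ = memoryEmptyDirPod() }
```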
[1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating projection with secret that has name projected-secret-test-0721c073-1d71-480d-9ceb-e342e0b761bc [1mSTEP[0m: Creating a pod to test consume secrets Jun 21 20:41:15.905: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2" in namespace "projected-9587" to be "Succeeded or Failed" Jun 21 20:41:16.005: INFO: Pod "pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2": Phase="Pending", Reason="", readiness=false. Elapsed: 99.461768ms Jun 21 20:41:18.102: INFO: Pod "pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196792758s Jun 21 20:41:20.200: INFO: Pod "pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29428419s Jun 21 20:41:22.297: INFO: Pod "pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391519043s Jun 21 20:41:24.395: INFO: Pod "pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.48951471s Jun 21 20:41:26.496: INFO: Pod "pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.591015909s [1mSTEP[0m: Saw pod success Jun 21 20:41:26.496: INFO: Pod "pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2" satisfied condition "Succeeded or Failed" Jun 21 20:41:26.594: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2 container projected-secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:26.796: INFO: Waiting for pod pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2 to disappear Jun 21 20:41:26.892: INFO: Pod pod-projected-secrets-291068bc-3a82-4f95-9eb8-30777b0228c2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:11.885 seconds][0m [sig-storage] Projected secret [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":22,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:27.092: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 45 lines ... 
[1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 [1mSTEP[0m: Creating configMap with name configmap-test-volume-fc18f19a-5b84-466f-a8a3-77e26fbb3373 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:41:14.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858" in namespace "configmap-923" to be "Succeeded or Failed" Jun 21 20:41:14.193: INFO: Pod "pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858": Phase="Pending", Reason="", readiness=false. Elapsed: 104.033951ms Jun 21 20:41:16.289: INFO: Pod "pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200107971s Jun 21 20:41:18.386: INFO: Pod "pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297107858s Jun 21 20:41:20.484: INFO: Pod "pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394975821s Jun 21 20:41:22.581: INFO: Pod "pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491666404s Jun 21 20:41:24.678: INFO: Pod "pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858": Phase="Pending", Reason="", readiness=false. Elapsed: 10.588679311s Jun 21 20:41:26.774: INFO: Pod "pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.685138036s [1mSTEP[0m: Saw pod success Jun 21 20:41:26.774: INFO: Pod "pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858" satisfied condition "Succeeded or Failed" Jun 21 20:41:26.870: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:27.082: INFO: Waiting for pod pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858 to disappear Jun 21 20:41:27.177: INFO: Pod pod-configmaps-ef0691af-c6c1-40dc-8e6a-ce68078eb858 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:13.974 seconds][0m [sig-storage] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":56,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:27.385: INFO: Only supported for providers [openstack] (not aws) ... skipping 41 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:41:28.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "watch-3986" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":7,"skipped":31,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:28.575: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 108 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":13,"skipped":101,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:28.982: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping ... skipping 147 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:41:29.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-7531" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":8,"skipped":35,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 18 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m When pod refers to non-existent ephemeral storage [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53[0m should allow deletion of pod with invalid volume : secret [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":4,"skipped":37,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:29.596: INFO: Only supported for providers [vsphere] (not aws) ... skipping 51 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: azure-disk] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for providers [azure] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567 [90m------------------------------[0m ... skipping 75 lines ... [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-f6ced8f7-a506-44cb-9f52-02ff3642c245 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:41:25.701: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-89d979f2-5b1c-49ce-b435-1c7563149875" in namespace "projected-5383" to be "Succeeded or Failed" Jun 21 20:41:25.800: INFO: Pod "pod-projected-configmaps-89d979f2-5b1c-49ce-b435-1c7563149875": Phase="Pending", Reason="", readiness=false. Elapsed: 99.449112ms Jun 21 20:41:27.900: INFO: Pod "pod-projected-configmaps-89d979f2-5b1c-49ce-b435-1c7563149875": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.198881964s Jun 21 20:41:30.010: INFO: Pod "pod-projected-configmaps-89d979f2-5b1c-49ce-b435-1c7563149875": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308657968s [1mSTEP[0m: Saw pod success Jun 21 20:41:30.010: INFO: Pod "pod-projected-configmaps-89d979f2-5b1c-49ce-b435-1c7563149875" satisfied condition "Succeeded or Failed" Jun 21 20:41:30.108: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-projected-configmaps-89d979f2-5b1c-49ce-b435-1c7563149875 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:30.314: INFO: Waiting for pod pod-projected-configmaps-89d979f2-5b1c-49ce-b435-1c7563149875 to disappear Jun 21 20:41:30.414: INFO: Pod pod-projected-configmaps-89d979f2-5b1c-49ce-b435-1c7563149875 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.643 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume as non-root [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":56,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 261 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and write from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":49,"failed":0} [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:31.529: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 101 lines ... 
[36mDriver local doesn't support ext3 -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":7,"skipped":68,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:30.873: INFO: >>> kubeConfig: /root/.kube/config ... skipping 87 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":9,"skipped":78,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":31,"failed":0} [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:31.368: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename replication-controller [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 18 lines ... [32m• [SLOW TEST:9.372 seconds][0m [sig-apps] ReplicationController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should adopt matching pods on creation [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":7,"skipped":31,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:49 [It] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:74 [1mSTEP[0m: Creating a pod to test emptydir volume type on tmpfs Jun 21 20:41:29.945: INFO: Waiting up to 5m0s for pod "pod-88d308c8-bae0-46a2-87df-7c18abccc257" in namespace "emptydir-7348" to be "Succeeded or Failed" Jun 21 20:41:30.051: INFO: Pod "pod-88d308c8-bae0-46a2-87df-7c18abccc257": Phase="Pending", Reason="", readiness=false. Elapsed: 105.695259ms Jun 21 20:41:32.151: INFO: Pod "pod-88d308c8-bae0-46a2-87df-7c18abccc257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206205076s Jun 21 20:41:34.249: INFO: Pod "pod-88d308c8-bae0-46a2-87df-7c18abccc257": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304369342s Jun 21 20:41:36.349: INFO: Pod "pod-88d308c8-bae0-46a2-87df-7c18abccc257": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404143773s Jun 21 20:41:38.446: INFO: Pod "pod-88d308c8-bae0-46a2-87df-7c18abccc257": Phase="Pending", Reason="", readiness=false. Elapsed: 8.501451414s Jun 21 20:41:40.544: INFO: Pod "pod-88d308c8-bae0-46a2-87df-7c18abccc257": Phase="Pending", Reason="", readiness=false. Elapsed: 10.599450531s Jun 21 20:41:42.642: INFO: Pod "pod-88d308c8-bae0-46a2-87df-7c18abccc257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.697519028s [1mSTEP[0m: Saw pod success Jun 21 20:41:42.642: INFO: Pod "pod-88d308c8-bae0-46a2-87df-7c18abccc257" satisfied condition "Succeeded or Failed" Jun 21 20:41:42.739: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-88d308c8-bae0-46a2-87df-7c18abccc257 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:42.947: INFO: Waiting for pod pod-88d308c8-bae0-46a2-87df-7c18abccc257 to disappear Jun 21 20:41:43.047: INFO: Pod pod-88d308c8-bae0-46a2-87df-7c18abccc257 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 6 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:47[0m volume on tmpfs should have the correct mode using FSGroup [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:74[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":9,"skipped":37,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:27.390: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 [1mSTEP[0m: Creating configMap with name configmap-test-volume-map-bdfa6f3d-d73a-4310-8fe9-cc84ed0ee972 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:41:28.073: INFO: Waiting up to 5m0s for pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd" in namespace "configmap-4109" to be "Succeeded or Failed" Jun 21 20:41:28.169: INFO: Pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd": Phase="Pending", Reason="", readiness=false. Elapsed: 95.991059ms Jun 21 20:41:30.267: INFO: Pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193733842s Jun 21 20:41:32.363: INFO: Pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289606659s Jun 21 20:41:34.459: INFO: Pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386164706s Jun 21 20:41:36.556: INFO: Pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482616051s Jun 21 20:41:38.656: INFO: Pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.583135491s Jun 21 20:41:40.756: INFO: Pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd": Phase="Running", Reason="", readiness=true. Elapsed: 12.682845153s Jun 21 20:41:42.855: INFO: Pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.781693643s [1mSTEP[0m: Saw pod success Jun 21 20:41:42.855: INFO: Pod "pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd" satisfied condition "Succeeded or Failed" Jun 21 20:41:42.955: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:43.199: INFO: Waiting for pod pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd to disappear Jun 21 20:41:43.295: INFO: Pod pod-configmaps-48d23e16-3c61-4dbb-b8a9-59a6add039dd no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:16.102 seconds][0m [sig-storage] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":66,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:40.745: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating projection with secret that has name projected-secret-test-3cb4c139-9a78-4c74-9fa4-41c8b22ad5de [1mSTEP[0m: Creating a pod to test consume secrets Jun 21 20:41:41.453: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e652766c-b8d7-4f44-a503-256f93462b73" in namespace "projected-3681" to be "Succeeded or Failed" Jun 21 20:41:41.551: INFO: Pod "pod-projected-secrets-e652766c-b8d7-4f44-a503-256f93462b73": Phase="Pending", Reason="", readiness=false. Elapsed: 97.812033ms Jun 21 20:41:43.649: INFO: Pod "pod-projected-secrets-e652766c-b8d7-4f44-a503-256f93462b73": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.195696809s [1mSTEP[0m: Saw pod success Jun 21 20:41:43.649: INFO: Pod "pod-projected-secrets-e652766c-b8d7-4f44-a503-256f93462b73" satisfied condition "Succeeded or Failed" Jun 21 20:41:43.747: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-projected-secrets-e652766c-b8d7-4f44-a503-256f93462b73 container projected-secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:43.959: INFO: Waiting for pod pod-projected-secrets-e652766c-b8d7-4f44-a503-256f93462b73 to disappear Jun 21 20:41:44.057: INFO: Pod pod-projected-secrets-e652766c-b8d7-4f44-a503-256f93462b73 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:41:44.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-3681" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":33,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:30.628: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 21 20:41:31.272: INFO: Waiting up to 5m0s for pod "security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0" in namespace "security-context-3918" to be "Succeeded or Failed" Jun 21 20:41:31.381: INFO: Pod "security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0": Phase="Pending", Reason="", readiness=false. Elapsed: 109.034949ms Jun 21 20:41:33.480: INFO: Pod "security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207696335s Jun 21 20:41:35.618: INFO: Pod "security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345333374s Jun 21 20:41:37.719: INFO: Pod "security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446797992s Jun 21 20:41:39.824: INFO: Pod "security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551755504s Jun 21 20:41:41.929: INFO: Pod "security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656842305s Jun 21 20:41:44.028: INFO: Pod "security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.755440448s [1mSTEP[0m: Saw pod success Jun 21 20:41:44.028: INFO: Pod "security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0" satisfied condition "Succeeded or Failed" Jun 21 20:41:44.134: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:44.399: INFO: Waiting for pod security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0 to disappear Jun 21 20:41:44.505: INFO: Pod security-context-bf0839a7-e8cd-4c24-a8a4-a1359e2470a0 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:14.077 seconds][0m [sig-node] Security Context [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:44.711: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 77 lines ... [1mSTEP[0m: Destroying namespace "apply-9753" for this suite. [AfterEach] [sig-api-machinery] ServerSideApply /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":8,"skipped":68,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 11 lines ... [1mSTEP[0m: Destroying namespace "services-7881" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":9,"skipped":71,"failed":0} [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:45.947: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename runtimeclass [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 3 lines ... 
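[editor's note] The "[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup" case logged above runs a pod whose container overrides its UID and GID and then checks the identity the process actually runs as. A minimal sketch of that pod shape is below; the UID/GID values and the id-printing command are illustrative assumptions.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runAsPod builds a pod whose container sets RunAsUser and RunAsGroup,
// the shape exercised by the Security Context conformance case above.
func runAsPod() *corev1.Pod {
	uid, gid := int64(1001), int64(2002)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-runas"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.36",
				Command: []string{"sh", "-c", "id -u && id -g"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:  &uid, // container runs as this UID
					RunAsGroup: &gid, // and this primary GID
				},
			}},
		},
	}
}

func main() { _ = runAsPod() }
```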
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:41:46.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-7635" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":10,"skipped":71,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:46.855: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping ... skipping 112 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: windows-gcepd] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m ... skipping 57 lines ... [32m• [SLOW TEST:8.200 seconds][0m [sig-node] PrivilegedPod [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should enable privileged commands [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":10,"skipped":81,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:47.822: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 23 lines ... 
[sig-storage] CSI Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: csi-hostpath] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver "csi-hostpath" does not support topology - skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92 [90m------------------------------[0m ... skipping 84 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m when scheduling a busybox command that always fails in a pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79[0m should have an terminated reason [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":39,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:56.472: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 79 lines ... [32m• [SLOW TEST:9.826 seconds][0m [sig-apps] ReplicaSet [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should validate Replicaset Status endpoints [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":11,"skipped":87,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:41:57.669: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 46 lines ... Jun 21 20:41:41.629: INFO: PersistentVolumeClaim pvc-npj86 found but phase is Pending instead of Bound. 
Jun 21 20:41:43.727: INFO: PersistentVolumeClaim pvc-npj86 found and phase=Bound (14.806824114s) Jun 21 20:41:43.727: INFO: Waiting up to 3m0s for PersistentVolume local-hfq6g to have phase Bound Jun 21 20:41:43.824: INFO: PersistentVolume local-hfq6g found and phase=Bound (96.986364ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-q52z [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:41:44.136: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q52z" in namespace "provisioning-9997" to be "Succeeded or Failed" Jun 21 20:41:44.270: INFO: Pod "pod-subpath-test-preprovisionedpv-q52z": Phase="Pending", Reason="", readiness=false. Elapsed: 133.24573ms Jun 21 20:41:46.367: INFO: Pod "pod-subpath-test-preprovisionedpv-q52z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230930786s Jun 21 20:41:48.466: INFO: Pod "pod-subpath-test-preprovisionedpv-q52z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329370324s Jun 21 20:41:50.571: INFO: Pod "pod-subpath-test-preprovisionedpv-q52z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434560123s Jun 21 20:41:52.669: INFO: Pod "pod-subpath-test-preprovisionedpv-q52z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532997704s Jun 21 20:41:54.769: INFO: Pod "pod-subpath-test-preprovisionedpv-q52z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.632592947s Jun 21 20:41:56.868: INFO: Pod "pod-subpath-test-preprovisionedpv-q52z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.732022178s Jun 21 20:41:58.966: INFO: Pod "pod-subpath-test-preprovisionedpv-q52z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.829794725s [1mSTEP[0m: Saw pod success Jun 21 20:41:58.966: INFO: Pod "pod-subpath-test-preprovisionedpv-q52z" satisfied condition "Succeeded or Failed" Jun 21 20:41:59.064: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-q52z container test-container-subpath-preprovisionedpv-q52z: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:59.296: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q52z to disappear Jun 21 20:41:59.394: INFO: Pod pod-subpath-test-preprovisionedpv-q52z no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-q52z Jun 21 20:41:59.394: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q52z" in namespace "provisioning-9997" ... skipping 21 lines ... 
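[editor's note] The "PersistentVolumeClaim ... found but phase is Pending instead of Bound" lines above are the framework polling a claim until it binds. A minimal client-go sketch of that wait is below; the kubeconfig path, namespace, claim name, and timings are illustrative assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Clientset from a kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll the claim until its phase is Bound, as in the
	// "found but phase is Pending instead of Bound" log lines.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := client.CoreV1().PersistentVolumeClaims("provisioning-9997").Get(context.TODO(), "pvc-example", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PVC %q phase=%q\n", pvc.Name, pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
	if err != nil {
		panic(err)
	}
}
```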
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":31,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 18 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m when scheduling a busybox command in a pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41[0m should print the output to logs [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":46,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 28 lines ... Jun 21 20:41:41.717: INFO: PersistentVolumeClaim pvc-9lfvg found but phase is Pending instead of Bound. Jun 21 20:41:43.816: INFO: PersistentVolumeClaim pvc-9lfvg found and phase=Bound (14.813103701s) Jun 21 20:41:43.816: INFO: Waiting up to 3m0s for PersistentVolume local-8cp69 to have phase Bound Jun 21 20:41:43.912: INFO: PersistentVolume local-8cp69 found and phase=Bound (96.556633ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-7z87 [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:41:44.239: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7z87" in namespace "provisioning-8107" to be "Succeeded or Failed" Jun 21 20:41:44.346: INFO: Pod "pod-subpath-test-preprovisionedpv-7z87": Phase="Pending", Reason="", readiness=false. Elapsed: 107.416028ms Jun 21 20:41:46.445: INFO: Pod "pod-subpath-test-preprovisionedpv-7z87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206149866s Jun 21 20:41:48.553: INFO: Pod "pod-subpath-test-preprovisionedpv-7z87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314229518s Jun 21 20:41:50.650: INFO: Pod "pod-subpath-test-preprovisionedpv-7z87": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.411479919s Jun 21 20:41:52.748: INFO: Pod "pod-subpath-test-preprovisionedpv-7z87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.508845087s Jun 21 20:41:54.849: INFO: Pod "pod-subpath-test-preprovisionedpv-7z87": Phase="Pending", Reason="", readiness=false. Elapsed: 10.610319778s Jun 21 20:41:56.948: INFO: Pod "pod-subpath-test-preprovisionedpv-7z87": Phase="Pending", Reason="", readiness=false. Elapsed: 12.709363609s Jun 21 20:41:59.053: INFO: Pod "pod-subpath-test-preprovisionedpv-7z87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.814489163s [1mSTEP[0m: Saw pod success Jun 21 20:41:59.053: INFO: Pod "pod-subpath-test-preprovisionedpv-7z87" satisfied condition "Succeeded or Failed" Jun 21 20:41:59.151: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-7z87 container test-container-volume-preprovisionedpv-7z87: <nil> [1mSTEP[0m: delete the pod Jun 21 20:41:59.374: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7z87 to disappear Jun 21 20:41:59.475: INFO: Pod pod-subpath-test-preprovisionedpv-7z87 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-7z87 Jun 21 20:41:59.475: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7z87" in namespace "provisioning-8107" ... skipping 30 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":40,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:02.657: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 61 lines ... [32m• [SLOW TEST:37.669 seconds][0m [sig-network] EndpointSlice [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:09.257: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 131 lines ... 
[32m• [SLOW TEST:7.597 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should provide DNS for the cluster [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 17 lines ... [32m• [SLOW TEST:12.178 seconds][0m [sig-auth] Certificates API [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23[0m should support building a client with a CSR [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:57[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":4,"skipped":33,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:13.307: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 83 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:42:14.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "deployment-1128" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:14.604: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 91 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m Verify if offline PVC expansion works [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":9,"skipped":70,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 61 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name configmap-projected-all-test-volume-839b780d-d7e5-45c7-8fb8-6f00e9106e92 [1mSTEP[0m: Creating secret with name secret-projected-all-test-volume-5af78aca-fc4e-429b-b63a-e4500eb55a68 [1mSTEP[0m: Creating a pod to test Check all projections for projected volume plugin Jun 21 20:42:14.106: INFO: Waiting up to 5m0s for pod "projected-volume-fd90f7d4-a0de-42cd-b54f-536af7723e51" in namespace "projected-4366" to be "Succeeded or Failed" Jun 21 20:42:14.208: INFO: Pod "projected-volume-fd90f7d4-a0de-42cd-b54f-536af7723e51": Phase="Pending", Reason="", readiness=false. Elapsed: 102.093423ms Jun 21 20:42:16.313: INFO: Pod "projected-volume-fd90f7d4-a0de-42cd-b54f-536af7723e51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206814662s Jun 21 20:42:18.412: INFO: Pod "projected-volume-fd90f7d4-a0de-42cd-b54f-536af7723e51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.305533889s [1mSTEP[0m: Saw pod success Jun 21 20:42:18.412: INFO: Pod "projected-volume-fd90f7d4-a0de-42cd-b54f-536af7723e51" satisfied condition "Succeeded or Failed" Jun 21 20:42:18.513: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod projected-volume-fd90f7d4-a0de-42cd-b54f-536af7723e51 container projected-all-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 21 20:42:18.718: INFO: Waiting for pod projected-volume-fd90f7d4-a0de-42cd-b54f-536af7723e51 to disappear Jun 21 20:42:18.816: INFO: Pod projected-volume-fd90f7d4-a0de-42cd-b54f-536af7723e51 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:5.700 seconds][0m [sig-storage] Projected combined [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should project all components that make up the projection API [Projection][NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":38,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes ... skipping 61 lines ... [1mSTEP[0m: Deleting pod aws-client in namespace volume-9806 Jun 21 20:42:05.857: INFO: Waiting for pod aws-client to disappear Jun 21 20:42:05.972: INFO: Pod aws-client still exists Jun 21 20:42:07.972: INFO: Waiting for pod aws-client to disappear Jun 21 20:42:08.069: INFO: Pod aws-client no longer exists [1mSTEP[0m: cleaning the environment after aws Jun 21 20:42:08.245: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0054c34e88d728829", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0054c34e88d728829 is currently attached to i-0a54d9ce3df6ebe23 status code: 400, request id: f9f4b3dc-5be5-4a9d-a272-268af7a9875b Jun 21 20:42:13.759: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0054c34e88d728829", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0054c34e88d728829 is currently attached to i-0a54d9ce3df6ebe23 status code: 400, request id: 730487c0-c23a-424d-a56d-2c5105cf3fc6 Jun 21 20:42:19.368: INFO: Successfully deleted PD "aws://eu-west-2a/vol-0054c34e88d728829". [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:42:19.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-9806" for this suite. ... skipping 6 lines ... 
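The "Couldn't delete PD ... VolumeInUse ... sleeping 5s" lines above show volume cleanup retrying deletion of an EBS volume that is still attached to an instance, then succeeding once it detaches. A rough aws-sdk-go sketch of that retry-on-VolumeInUse loop (an illustration of the pattern, not the suite's own code; the attempt count is an assumption):

package cleanup

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// deleteVolumeWithRetry keeps trying to delete an EBS volume, sleeping 5s
// between attempts while AWS still reports it as attached (VolumeInUse).
func deleteVolumeWithRetry(svc ec2iface.EC2API, volumeID string, attempts int) error {
	for i := 0; i < attempts; i++ {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			fmt.Printf("Successfully deleted PD %q.\n", volumeID)
			return nil
		}
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			fmt.Printf("Couldn't delete PD %q, sleeping 5s: %v\n", volumeID, err)
			time.Sleep(5 * time.Second)
			continue
		}
		return err
	}
	return fmt.Errorf("volume %s still in use after %d attempts", volumeID, attempts)
}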
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":6,"skipped":41,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:42:19.019: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 [1mSTEP[0m: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 21 20:42:19.609: INFO: Waiting up to 5m0s for pod "security-context-bac17dde-8e1f-44a1-af22-041d3b83c8e2" in namespace "security-context-594" to be "Succeeded or Failed" Jun 21 20:42:19.706: INFO: Pod "security-context-bac17dde-8e1f-44a1-af22-041d3b83c8e2": Phase="Pending", Reason="", readiness=false. Elapsed: 97.170558ms Jun 21 20:42:21.805: INFO: Pod "security-context-bac17dde-8e1f-44a1-af22-041d3b83c8e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195711145s [1mSTEP[0m: Saw pod success Jun 21 20:42:21.805: INFO: Pod "security-context-bac17dde-8e1f-44a1-af22-041d3b83c8e2" satisfied condition "Succeeded or Failed" Jun 21 20:42:21.906: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod security-context-bac17dde-8e1f-44a1-af22-041d3b83c8e2 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:42:22.123: INFO: Waiting for pod security-context-bac17dde-8e1f-44a1-af22-041d3b83c8e2 to disappear Jun 21 20:42:22.221: INFO: Pod security-context-bac17dde-8e1f-44a1-af22-041d3b83c8e2 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 13 lines ... [It] should support readOnly file specified in the volumeMount [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380 Jun 21 20:42:15.097: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 21 20:42:15.204: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-7pjt [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:42:15.308: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7pjt" in namespace "provisioning-5689" to be "Succeeded or Failed" Jun 21 20:42:15.405: INFO: Pod "pod-subpath-test-inlinevolume-7pjt": Phase="Pending", Reason="", readiness=false. Elapsed: 96.684542ms Jun 21 20:42:17.504: INFO: Pod "pod-subpath-test-inlinevolume-7pjt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.195286651s Jun 21 20:42:19.602: INFO: Pod "pod-subpath-test-inlinevolume-7pjt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293732239s Jun 21 20:42:21.700: INFO: Pod "pod-subpath-test-inlinevolume-7pjt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.391976566s [1mSTEP[0m: Saw pod success Jun 21 20:42:21.701: INFO: Pod "pod-subpath-test-inlinevolume-7pjt" satisfied condition "Succeeded or Failed" Jun 21 20:42:21.798: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-7pjt container test-container-subpath-inlinevolume-7pjt: <nil> [1mSTEP[0m: delete the pod Jun 21 20:42:22.017: INFO: Waiting for pod pod-subpath-test-inlinevolume-7pjt to disappear Jun 21 20:42:22.113: INFO: Pod pod-subpath-test-inlinevolume-7pjt no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-7pjt Jun 21 20:42:22.113: INFO: Deleting pod "pod-subpath-test-inlinevolume-7pjt" in namespace "provisioning-5689" ... skipping 12 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":56,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 37 lines ... [32m• [SLOW TEST:28.563 seconds][0m [sig-network] Conntrack [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to preserve UDP traffic when server pod cycles for a ClusterIP service [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:206[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":12,"skipped":51,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:30.261: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 62 lines ... 
[32m• [SLOW TEST:144.191 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should have monotonically increasing restart count [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":79,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 93 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume at the same time [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":73,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:34.145: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 174 lines ... [32m• [SLOW TEST:63.622 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m iterative rollouts should eventually progress [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":9,"skipped":64,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning ... skipping 153 lines ... [1mSTEP[0m: Destroying namespace "apply-9737" for this suite. 
[AfterEach] [sig-api-machinery] ServerSideApply /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":11,"skipped":84,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:38.044: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping ... skipping 85 lines ... [1mSTEP[0m: Deleting pod verify-service-up-exec-pod-npnw6 in namespace services-9206 [1mSTEP[0m: verifying service-disabled is not up Jun 21 20:42:08.792: INFO: Creating new host exec pod Jun 21 20:42:08.990: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jun 21 20:42:11.088: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jun 21 20:42:13.091: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Jun 21 20:42:13.091: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9206 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.87.7:80 && echo service-down-failed' Jun 21 20:42:16.351: INFO: rc: 28 Jun 21 20:42:16.351: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.70.87.7:80 && echo service-down-failed" in pod services-9206/verify-service-down-host-exec-pod: error running /logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9206 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.87.7:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://100.70.87.7:80 command terminated with exit code 28 error: exit status 28 Output: [1mSTEP[0m: Deleting pod verify-service-down-host-exec-pod in namespace services-9206 [1mSTEP[0m: adding service-proxy-name label [1mSTEP[0m: verifying service is not up Jun 21 20:42:16.673: INFO: Creating new host exec pod Jun 21 20:42:16.882: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jun 21 20:42:18.984: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Jun 21 20:42:18.984: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9206 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.52.141:80 && echo service-down-failed' Jun 21 20:42:22.272: INFO: rc: 28 Jun 21 20:42:22.272: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.71.52.141:80 && echo service-down-failed" in pod services-9206/verify-service-down-host-exec-pod: error running 
/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9206 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.52.141:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://100.71.52.141:80 command terminated with exit code 28 error: exit status 28 Output: [1mSTEP[0m: Deleting pod verify-service-down-host-exec-pod in namespace services-9206 [1mSTEP[0m: removing service-proxy-name annotation [1mSTEP[0m: verifying service is up Jun 21 20:42:22.689: INFO: Creating new host exec pod ... skipping 12 lines ... [1mSTEP[0m: Deleting pod verify-service-up-host-exec-pod in namespace services-9206 [1mSTEP[0m: Deleting pod verify-service-up-exec-pod-ln7fx in namespace services-9206 [1mSTEP[0m: verifying service-disabled is still not up Jun 21 20:42:33.849: INFO: Creating new host exec pod Jun 21 20:42:34.056: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jun 21 20:42:36.166: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Jun 21 20:42:36.166: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9206 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.87.7:80 && echo service-down-failed' Jun 21 20:42:39.430: INFO: rc: 28 Jun 21 20:42:39.430: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.70.87.7:80 && echo service-down-failed" in pod services-9206/verify-service-down-host-exec-pod: error running /logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9206 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.87.7:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://100.70.87.7:80 command terminated with exit code 28 error: exit status 28 Output: [1mSTEP[0m: Deleting pod verify-service-down-host-exec-pod in namespace services-9206 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:42:39.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready ... skipping 5 lines ... [32m• [SLOW TEST:70.163 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should implement service.kubernetes.io/service-proxy-name [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1889[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":5,"skipped":59,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 30 lines ... 
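The service-proxy-name checks above probe reachability by running curl with a 2s connect timeout inside a host-exec pod via kubectl exec; curl's exit code 28 (connect timeout) is the expected outcome when the service is supposed to be down. A small sketch of driving the same check from Go with os/exec (illustrative; the flags mirror the command shown in the log, and treating only exit code 28 as a clean "unreachable" result is an assumption):

package svccheck

import (
	"fmt"
	"os/exec"
)

// serviceUnreachable runs curl inside an existing host-exec pod via kubectl
// and reports true when curl times out (exit code 28), i.e. nothing answers
// on the service's cluster IP.
func serviceUnreachable(kubectlPath, kubeconfig, namespace, pod, url string) (bool, error) {
	cmd := exec.Command(kubectlPath,
		"--kubeconfig="+kubeconfig,
		"--namespace="+namespace,
		"exec", pod, "--",
		"/bin/sh", "-x", "-c",
		fmt.Sprintf("curl -g -s --connect-timeout 2 %s && echo service-down-failed", url),
	)
	out, err := cmd.CombinedOutput()
	if err == nil {
		// curl got a response; the "service is down" expectation failed.
		return false, fmt.Errorf("unexpected response: %s", out)
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 28 {
		return true, nil // curl connect timeout: nothing is listening.
	}
	return false, err
}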
[32m• [SLOW TEST:12.651 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should mutate custom resource with different stored version [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":13,"skipped":53,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:42.922: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 93 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: hostPathSymlink] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver hostPathSymlink doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 473 lines ... [32m• [SLOW TEST:13.118 seconds][0m [sig-network] Service endpoints latency [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should not be very high [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":8,"skipped":82,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:43.905: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 25 lines ... 
[AfterEach] [sig-api-machinery] client-go should negotiate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:42:44.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":9,"skipped":85,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:44.209: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 30 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:42:44.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "custom-resource-definition-7102" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":14,"skipped":77,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:44.428: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 5 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: hostPathSymlink] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver hostPathSymlink doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":6,"skipped":41,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:42:22.514: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename statefulset [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 109 lines ... 
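Lines such as "Driver local doesn't support DynamicPV -- skipping" and "Only supported for providers [gce gke] (not aws)" above are emitted before each test pattern runs: the suite checks the driver's declared capabilities and provider and skips combinations it cannot exercise. A condensed Ginkgo-style sketch of that guard (names like driverSupports are hypothetical, for illustration only):

package suite

import (
	"fmt"

	"github.com/onsi/ginkgo/v2"
)

// skipUnlessSupported bails out of the current spec when the driver does not
// declare the capability a test pattern needs, mirroring the
// "Driver X doesn't support Y -- skipping" lines in the log.
func skipUnlessSupported(driverName, pattern string, driverSupports func(string) bool) {
	if !driverSupports(pattern) {
		ginkgo.Skip(fmt.Sprintf("Driver %s doesn't support %s -- skipping", driverName, pattern))
	}
}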
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and read from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":10,"skipped":91,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:47.853: INFO: Only supported for providers [openstack] (not aws) ... skipping 55 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:42:49.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "metrics-grabber-1573" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":11,"skipped":97,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:42:44.215: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-map-06445eff-ded1-41b6-8aed-78784f924880 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:42:44.920: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0ba61b26-215f-4044-b4ff-3838273f5a6d" in namespace "projected-4703" to be "Succeeded or Failed" Jun 21 20:42:45.017: INFO: Pod "pod-projected-configmaps-0ba61b26-215f-4044-b4ff-3838273f5a6d": Phase="Pending", Reason="", readiness=false. Elapsed: 96.854252ms Jun 21 20:42:47.115: INFO: Pod "pod-projected-configmaps-0ba61b26-215f-4044-b4ff-3838273f5a6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194289824s Jun 21 20:42:49.214: INFO: Pod "pod-projected-configmaps-0ba61b26-215f-4044-b4ff-3838273f5a6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.293590394s [1mSTEP[0m: Saw pod success Jun 21 20:42:49.214: INFO: Pod "pod-projected-configmaps-0ba61b26-215f-4044-b4ff-3838273f5a6d" satisfied condition "Succeeded or Failed" Jun 21 20:42:49.322: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-projected-configmaps-0ba61b26-215f-4044-b4ff-3838273f5a6d container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:42:49.561: INFO: Waiting for pod pod-projected-configmaps-0ba61b26-215f-4044-b4ff-3838273f5a6d to disappear Jun 21 20:42:49.658: INFO: Pod pod-projected-configmaps-0ba61b26-215f-4044-b4ff-3838273f5a6d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.643 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":10,"skipped":90,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:49.860: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 42 lines ... [32m• [SLOW TEST:83.351 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m updates should be reflected in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":134,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:52.395: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 114 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume one after the other [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":99,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath ... skipping 5 lines ... [It] should support existing single file [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219 Jun 21 20:42:50.051: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 21 20:42:50.150: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-kkk4 [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:42:50.251: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kkk4" in namespace "provisioning-3988" to be "Succeeded or Failed" Jun 21 20:42:50.348: INFO: Pod "pod-subpath-test-inlinevolume-kkk4": Phase="Pending", Reason="", readiness=false. Elapsed: 97.426325ms Jun 21 20:42:52.449: INFO: Pod "pod-subpath-test-inlinevolume-kkk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198364767s Jun 21 20:42:54.548: INFO: Pod "pod-subpath-test-inlinevolume-kkk4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.296927859s [1mSTEP[0m: Saw pod success Jun 21 20:42:54.548: INFO: Pod "pod-subpath-test-inlinevolume-kkk4" satisfied condition "Succeeded or Failed" Jun 21 20:42:54.645: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-kkk4 container test-container-subpath-inlinevolume-kkk4: <nil> [1mSTEP[0m: delete the pod Jun 21 20:42:54.856: INFO: Waiting for pod pod-subpath-test-inlinevolume-kkk4 to disappear Jun 21 20:42:54.953: INFO: Pod pod-subpath-test-inlinevolume-kkk4 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-kkk4 Jun 21 20:42:54.953: INFO: Deleting pod "pod-subpath-test-inlinevolume-kkk4" in namespace "provisioning-3988" ... skipping 12 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":12,"skipped":103,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":7,"skipped":41,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:42:45.782: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename resourcequota [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 15 lines ... [32m• [SLOW TEST:12.441 seconds][0m [sig-api-machinery] ResourceQuota [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a replication controller. [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":8,"skipped":41,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 17 lines ... Jun 21 20:42:26.550: INFO: PersistentVolumeClaim pvc-nhslr found but phase is Pending instead of Bound. Jun 21 20:42:28.649: INFO: PersistentVolumeClaim pvc-nhslr found and phase=Bound (2.195086041s) Jun 21 20:42:28.649: INFO: Waiting up to 3m0s for PersistentVolume local-rzbx4 to have phase Bound Jun 21 20:42:28.747: INFO: PersistentVolume local-rzbx4 found and phase=Bound (98.317228ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-57t8 [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 21 20:42:29.040: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-57t8" in namespace "provisioning-2590" to be "Succeeded or Failed" Jun 21 20:42:29.137: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 97.133917ms Jun 21 20:42:31.235: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195439674s Jun 21 20:42:33.333: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292933441s Jun 21 20:42:35.445: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405816443s Jun 21 20:42:37.546: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Running", Reason="", readiness=true. Elapsed: 8.506673115s Jun 21 20:42:39.647: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Running", Reason="", readiness=true. Elapsed: 10.607117541s ... skipping 3 lines ... Jun 21 20:42:48.062: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Running", Reason="", readiness=true. Elapsed: 19.022253086s Jun 21 20:42:50.161: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Running", Reason="", readiness=true. Elapsed: 21.121129247s Jun 21 20:42:52.266: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Running", Reason="", readiness=true. Elapsed: 23.225883431s Jun 21 20:42:54.362: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Running", Reason="", readiness=true. Elapsed: 25.322775832s Jun 21 20:42:56.479: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.439078466s [1mSTEP[0m: Saw pod success Jun 21 20:42:56.479: INFO: Pod "pod-subpath-test-preprovisionedpv-57t8" satisfied condition "Succeeded or Failed" Jun 21 20:42:56.605: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-57t8 container test-container-subpath-preprovisionedpv-57t8: <nil> [1mSTEP[0m: delete the pod Jun 21 20:42:56.864: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-57t8 to disappear Jun 21 20:42:56.973: INFO: Pod pod-subpath-test-preprovisionedpv-57t8 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-57t8 Jun 21 20:42:56.973: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-57t8" in namespace "provisioning-2590" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":9,"skipped":58,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:42:59.615: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 155 lines ... 
Jun 21 20:42:55.385: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 [1mSTEP[0m: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 21 20:42:56.060: INFO: Waiting up to 5m0s for pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e" in namespace "security-context-4650" to be "Succeeded or Failed" Jun 21 20:42:56.191: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 131.188198ms Jun 21 20:42:58.288: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228654119s Jun 21 20:43:00.394: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334755919s Jun 21 20:43:02.494: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434586684s Jun 21 20:43:04.601: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541416956s Jun 21 20:43:06.707: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.646936173s Jun 21 20:43:08.811: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.751117544s Jun 21 20:43:10.921: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.861492923s Jun 21 20:43:13.020: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.960138068s Jun 21 20:43:15.121: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.061054796s [1mSTEP[0m: Saw pod success Jun 21 20:43:15.121: INFO: Pod "security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e" satisfied condition "Succeeded or Failed" Jun 21 20:43:15.218: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:43:15.433: INFO: Waiting for pod security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e to disappear Jun 21 20:43:15.538: INFO: Pod security-context-f8e88452-87ee-4c9a-9cee-723e700ddf6e no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
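The seccomp specs above create a pod annotated with seccomp.security.alpha.kubernetes.io/pod and then read the container's log to confirm the effective profile. A minimal sketch of such a pod object using client-go types (illustrative only; image and command are placeholders, and current Kubernetes versions would set securityContext.seccompProfile rather than this legacy annotation):

package seccomp

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unconfinedPod builds a pod that requests an unconfined seccomp profile via
// the legacy alpha annotation exercised by this test.
func unconfinedPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder image
				Command: []string{"/bin/sh", "-c", "grep Seccomp /proc/1/status"},
			}},
		},
	}
}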
[32m• [SLOW TEST:20.350 seconds][0m [sig-node] Security Context [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m should support seccomp default which is unconfined [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":13,"skipped":104,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... [32m• [SLOW TEST:37.348 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be ready immediately after startupProbe succeeds [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:408[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":15,"skipped":151,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:43:29.755: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 120 lines ... [32m• [SLOW TEST:34.654 seconds][0m [sig-api-machinery] Garbage collector [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":13,"skipped":106,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 21 20:42:58.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249" in namespace "projected-6128" to be "Succeeded or Failed" Jun 21 20:42:58.936: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. 
Elapsed: 96.973677ms Jun 21 20:43:01.033: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19433854s Jun 21 20:43:03.131: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292229085s Jun 21 20:43:05.229: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390152827s Jun 21 20:43:07.334: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495082888s Jun 21 20:43:09.434: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. Elapsed: 10.594534604s ... skipping 7 lines ... Jun 21 20:43:26.262: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. Elapsed: 27.422598047s Jun 21 20:43:28.361: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. Elapsed: 29.521763685s Jun 21 20:43:30.459: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. Elapsed: 31.619874835s Jun 21 20:43:32.557: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Pending", Reason="", readiness=false. Elapsed: 33.718196978s Jun 21 20:43:34.655: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.816015288s [1mSTEP[0m: Saw pod success Jun 21 20:43:34.655: INFO: Pod "downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249" satisfied condition "Succeeded or Failed" Jun 21 20:43:34.757: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:43:34.962: INFO: Waiting for pod downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249 to disappear Jun 21 20:43:35.059: INFO: Pod downwardapi-volume-88be062c-f346-49a5-a16b-5718eaebd249 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:37.013 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide podname only [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":55,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:43:35.258: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 21 lines ... [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-map-555b9580-d7c6-4c55-88b4-0eb3fc85e418 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:43:00.339: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90" in namespace "projected-7842" to be "Succeeded or Failed" Jun 21 20:43:00.436: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 97.146132ms Jun 21 20:43:02.534: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195798559s Jun 21 20:43:04.638: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299587566s Jun 21 20:43:06.752: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413527127s Jun 21 20:43:08.850: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 8.511604051s Jun 21 20:43:10.959: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 10.619855115s ... skipping 7 lines ... Jun 21 20:43:27.844: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 27.505077091s Jun 21 20:43:29.951: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 29.612064374s Jun 21 20:43:32.049: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.710673117s Jun 21 20:43:34.149: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 33.810558506s Jun 21 20:43:36.249: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.909817859s [1mSTEP[0m: Saw pod success Jun 21 20:43:36.249: INFO: Pod "pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90" satisfied condition "Succeeded or Failed" Jun 21 20:43:36.350: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:43:36.555: INFO: Waiting for pod pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90 to disappear Jun 21 20:43:36.657: INFO: Pod pod-projected-configmaps-b0650863-5d0d-46e6-b679-9678490e8d90 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:37.209 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":77,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:43:36.860: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 65 lines ... Jun 21 20:43:28.021: INFO: PersistentVolumeClaim pvc-sfdht found but phase is Pending instead of Bound. Jun 21 20:43:30.124: INFO: PersistentVolumeClaim pvc-sfdht found and phase=Bound (6.440661094s) Jun 21 20:43:30.124: INFO: Waiting up to 3m0s for PersistentVolume local-cxhb5 to have phase Bound Jun 21 20:43:30.224: INFO: PersistentVolume local-cxhb5 found and phase=Bound (99.62685ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-72th [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:43:30.529: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-72th" in namespace "provisioning-5298" to be "Succeeded or Failed" Jun 21 20:43:30.626: INFO: Pod "pod-subpath-test-preprovisionedpv-72th": Phase="Pending", Reason="", readiness=false. Elapsed: 96.967064ms Jun 21 20:43:32.723: INFO: Pod "pod-subpath-test-preprovisionedpv-72th": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194440921s Jun 21 20:43:34.821: INFO: Pod "pod-subpath-test-preprovisionedpv-72th": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292347259s Jun 21 20:43:36.918: INFO: Pod "pod-subpath-test-preprovisionedpv-72th": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.389591761s [1mSTEP[0m: Saw pod success Jun 21 20:43:36.918: INFO: Pod "pod-subpath-test-preprovisionedpv-72th" satisfied condition "Succeeded or Failed" Jun 21 20:43:37.023: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-72th container test-container-subpath-preprovisionedpv-72th: <nil> [1mSTEP[0m: delete the pod Jun 21 20:43:37.283: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-72th to disappear Jun 21 20:43:37.379: INFO: Pod pod-subpath-test-preprovisionedpv-72th no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-72th Jun 21 20:43:37.379: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-72th" in namespace "provisioning-5298" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":14,"skipped":109,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:43:38.785: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 81 lines ... Jun 21 20:43:38.795: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0644 on tmpfs Jun 21 20:43:39.385: INFO: Waiting up to 5m0s for pod "pod-6b4714d9-8f3b-4281-bf8f-a4aa500cfb79" in namespace "emptydir-8072" to be "Succeeded or Failed" Jun 21 20:43:39.482: INFO: Pod "pod-6b4714d9-8f3b-4281-bf8f-a4aa500cfb79": Phase="Pending", Reason="", readiness=false. Elapsed: 96.714894ms Jun 21 20:43:41.580: INFO: Pod "pod-6b4714d9-8f3b-4281-bf8f-a4aa500cfb79": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.194805359s [1mSTEP[0m: Saw pod success Jun 21 20:43:41.580: INFO: Pod "pod-6b4714d9-8f3b-4281-bf8f-a4aa500cfb79" satisfied condition "Succeeded or Failed" Jun 21 20:43:41.677: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-6b4714d9-8f3b-4281-bf8f-a4aa500cfb79 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:43:41.888: INFO: Waiting for pod pod-6b4714d9-8f3b-4281-bf8f-a4aa500cfb79 to disappear Jun 21 20:43:41.987: INFO: Pod pod-6b4714d9-8f3b-4281-bf8f-a4aa500cfb79 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:43:41.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "emptydir-8072" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":113,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:43:42.189: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 36 lines ... [32m• [SLOW TEST:247.128 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":42,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] API priority and fairness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 45 lines ... [32m• [SLOW TEST:34.392 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be restarted with a local redirect http liveness probe [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:282[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":16,"skipped":154,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:04.154: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 201 lines ... 
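The two Probing container results above come down to liveness-probe configuration: a TCP probe that keeps succeeding never triggers a restart, while a probe that fails failureThreshold times in a row does. A rough sketch of a container with a TCP liveness probe on 8080, as in the passing "tcp:8080" case above; it assumes k8s.io/api/core/v1, and the image, args, and thresholds are illustrative rather than the framework's exact values.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// tcpLivenessContainer returns a container with a TCP liveness probe on 8080.
// The handler is set through the promoted TCPSocket field so the sketch does
// not depend on whether the embedded struct is named Handler or ProbeHandler
// in the k8s.io/api release in use.
func tcpLivenessContainer() corev1.Container {
    probe := &corev1.Probe{
        InitialDelaySeconds: 15,
        PeriodSeconds:       10,
        FailureThreshold:    3,
    }
    probe.TCPSocket = &corev1.TCPSocketAction{Port: intstr.FromInt(8080)}

    return corev1.Container{
        Name:          "liveness",
        Image:         "k8s.gcr.io/e2e-test-images/agnhost:2.33",
        Args:          []string{"netexec", "--http-port=8080"},
        LivenessProbe: probe,
    }
}

func main() {
    out, _ := json.MarshalIndent(tcpLivenessContainer(), "", "  ")
    fmt.Println(string(out))
}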
[32m• [SLOW TEST:9.622 seconds][0m [sig-apps] Deployment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m deployment should support proportional scaling [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":9,"skipped":48,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:05.776: INFO: Only supported for providers [openstack] (not aws) ... skipping 55 lines ... Jun 21 20:43:35.949: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} [1mSTEP[0m: creating a StorageClass volume-expand-3785pc9q2 [1mSTEP[0m: creating a claim Jun 21 20:43:36.054: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Expanding non-expandable pvc Jun 21 20:43:36.250: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI} Jun 21 20:43:36.449: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:43:38.652: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:43:40.650: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:43:42.663: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... 
// 3 identical fields } Jun 21 20:43:44.645: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:43:46.644: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:43:48.694: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:43:50.646: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:43:52.645: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:43:54.646: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:43:56.646: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... 
// 3 identical fields } Jun 21 20:43:58.652: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:44:00.663: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:44:02.644: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:44:04.647: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:44:06.646: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-3785pc9q2", ... // 3 identical fields } Jun 21 20:44:06.843: INFO: Error updating pvc aws6rgcp: PersistentVolumeClaim "aws6rgcp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 24 lines ... 
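The repeated "Error updating pvc aws6rgcp" messages above are the expected outcome of this test: the StorageClass was created without allowVolumeExpansion, so every attempt to grow spec.resources.requests is rejected by apiserver validation. A minimal sketch of that resize attempt, assuming a client-go clientset; namespace, claim name, and target size are illustrative.

package resize

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// tryExpand bumps the storage request on an existing PVC. When the claim's
// StorageClass does not set allowVolumeExpansion, the apiserver rejects the
// update as invalid ("spec is immutable after creation except
// resources.requests for bound claims"), which is exactly what the log shows.
func tryExpand(ctx context.Context, client kubernetes.Interface, ns, name string) error {
    pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse("2Gi")

    _, err = client.CoreV1().PersistentVolumeClaims(ns).Update(ctx, pvc, metav1.UpdateOptions{})
    if apierrors.IsInvalid(err) || apierrors.IsForbidden(err) {
        fmt.Printf("expansion correctly rejected: %v\n", err)
        return nil
    }
    return err
}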
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not allow expansion of pvcs without AllowVolumeExpansion property [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":10,"skipped":57,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:07.460: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 29 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:44:07.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "lease-test-4440" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":10,"skipped":55,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:07.862: INFO: Only supported for providers [gce gke] (not aws) ... skipping 92 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99[0m should implement legacy replacement when the update strategy is OnDelete [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:505[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":15,"skipped":80,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes ... skipping 102 lines ... 
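The StatefulSet test above ("legacy replacement when the update strategy is OnDelete") hinges on a single field: with an OnDelete update strategy the controller only replaces pods after they are deleted, and never rolls them automatically. A minimal sketch of such a spec, assuming k8s.io/api/apps/v1; names, image, and replica count are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// onDeleteStatefulSet builds a StatefulSet whose pods are only replaced when
// they are deleted manually; spec changes do not trigger a rolling update.
func onDeleteStatefulSet() *appsv1.StatefulSet {
    replicas := int32(3)
    labels := map[string]string{"app": "ss-example"}
    return &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss-example"},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    &replicas,
            ServiceName: "ss-example",
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
                Type: appsv1.OnDeleteStatefulSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "webserver",
                        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.33",
                    }},
                },
            },
        },
    }
}

func main() {
    out, _ := json.MarshalIndent(onDeleteStatefulSet(), "", "  ")
    fmt.Println(string(out))
}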
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":14,"skipped":107,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:15.617: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 14 lines ... [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":9,"skipped":88,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:41:47.842: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename services [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 127 lines ... 
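The Services test that starts above (and fails below) exercises ClientIP session affinity: with sessionAffinity set on the Service, repeated requests from the same client are supposed to keep landing on the same backend pod. A minimal sketch of such a Service, assuming k8s.io/api/core/v1; the selector and port numbers are illustrative, the service name matches the one in the log.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// clientIPAffinityService builds a ClusterIP service that pins each client IP
// to a single backend pod for the duration of the affinity timeout.
func clientIPAffinityService() *corev1.Service {
    return &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
        Spec: corev1.ServiceSpec{
            Type:            corev1.ServiceTypeClusterIP,
            Selector:        map[string]string{"name": "affinity-clusterip"},
            SessionAffinity: corev1.ServiceAffinityClientIP,
            Ports: []corev1.ServicePort{{
                Port:       80,
                TargetPort: intstr.FromInt(9376),
            }},
        },
    }
}

func main() {
    out, _ := json.MarshalIndent(clientIPAffinityService(), "", "  ")
    fmt.Println(string(out))
}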
Jun 21 20:44:10.160: INFO: Received response from host: affinity-clusterip-wzmxz Jun 21 20:44:10.160: INFO: Received response from host: affinity-clusterip-wkrjk Jun 21 20:44:10.160: INFO: Received response from host: affinity-clusterip-h4ddt Jun 21 20:44:10.160: INFO: Received response from host: affinity-clusterip-wzmxz Jun 21 20:44:10.160: INFO: Received response from host: affinity-clusterip-wzmxz Jun 21 20:44:10.160: INFO: [affinity-clusterip-h4ddt affinity-clusterip-h4ddt affinity-clusterip-h4ddt affinity-clusterip-wzmxz affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-h4ddt affinity-clusterip-h4ddt affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wzmxz affinity-clusterip-h4ddt affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wzmxz affinity-clusterip-wzmxz affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-wzmxz affinity-clusterip-wzmxz affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wkrjk affinity-clusterip-wkrjk affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wzmxz affinity-clusterip-wkrjk affinity-clusterip-h4ddt affinity-clusterip-wzmxz affinity-clusterip-wzmxz] Jun 21 20:44:10.161: FAIL: Affinity should hold but didn't. Full Stack Trace k8s.io/kubernetes/test/e2e/network.checkAffinity({0x78e7e70, 0xc006b1af00}, 0x0, {0xc0044f2f80, 0x6eec963}, 0x10, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:209 +0x1b7 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x6ef70c8, {0x78e7e70, 0xc006b1af00}, 0xc0009be000, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2963 +0x6aa ... skipping 33 lines ... 
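The FAIL above comes from checkAffinity: the test sends a burst of requests through the service and requires every response to come from one backend, and the mixed list of affinity-clusterip-h4ddt / -wkrjk / -wzmxz hostnames shows the requests were spread across all three pods instead. A stripped-down sketch of that check follows; it mirrors the idea only and is not the framework's actual code.

package main

import "fmt"

// affinityHolds reports whether every response in the sample came from the
// same backend pod, i.e. whether ClientIP session affinity actually stuck.
func affinityHolds(hosts []string) bool {
    if len(hosts) == 0 {
        return false
    }
    for _, h := range hosts[1:] {
        if h != hosts[0] {
            return false
        }
    }
    return true
}

func main() {
    sample := []string{
        "affinity-clusterip-h4ddt",
        "affinity-clusterip-wzmxz", // a second hostname breaks affinity
        "affinity-clusterip-h4ddt",
    }
    fmt.Println("affinity holds:", affinityHolds(sample)) // prints: affinity holds: false
}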
Jun 21 20:44:16.607: INFO: At 2022-06-21 20:41:51 +0000 UTC - event for affinity-clusterip-wzmxz: {kubelet ip-172-20-0-54.eu-west-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.33" already present on machine Jun 21 20:44:16.607: INFO: At 2022-06-21 20:41:51 +0000 UTC - event for affinity-clusterip-wzmxz: {kubelet ip-172-20-0-54.eu-west-2.compute.internal} Started: Started container affinity-clusterip Jun 21 20:44:16.607: INFO: At 2022-06-21 20:41:51 +0000 UTC - event for affinity-clusterip-wzmxz: {kubelet ip-172-20-0-54.eu-west-2.compute.internal} Created: Created container affinity-clusterip Jun 21 20:44:16.607: INFO: At 2022-06-21 20:41:59 +0000 UTC - event for execpod-affinityzn6wf: {kubelet ip-172-20-0-246.eu-west-2.compute.internal} Started: Started container agnhost-container Jun 21 20:44:16.607: INFO: At 2022-06-21 20:41:59 +0000 UTC - event for execpod-affinityzn6wf: {kubelet ip-172-20-0-246.eu-west-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.33" already present on machine Jun 21 20:44:16.607: INFO: At 2022-06-21 20:41:59 +0000 UTC - event for execpod-affinityzn6wf: {kubelet ip-172-20-0-246.eu-west-2.compute.internal} Created: Created container agnhost-container Jun 21 20:44:16.607: INFO: At 2022-06-21 20:44:10 +0000 UTC - event for affinity-clusterip: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-2069/affinity-clusterip: Operation cannot be fulfilled on endpoints "affinity-clusterip": the object has been modified; please apply your changes to the latest version and try again Jun 21 20:44:16.607: INFO: At 2022-06-21 20:44:10 +0000 UTC - event for affinity-clusterip-h4ddt: {kubelet ip-172-20-0-148.eu-west-2.compute.internal} Killing: Stopping container affinity-clusterip Jun 21 20:44:16.607: INFO: At 2022-06-21 20:44:10 +0000 UTC - event for affinity-clusterip-wkrjk: {kubelet ip-172-20-0-5.eu-west-2.compute.internal} Killing: Stopping container affinity-clusterip Jun 21 20:44:16.607: INFO: At 2022-06-21 20:44:10 +0000 UTC - event for affinity-clusterip-wzmxz: {kubelet ip-172-20-0-54.eu-west-2.compute.internal} Killing: Stopping container affinity-clusterip Jun 21 20:44:16.607: INFO: At 2022-06-21 20:44:10 +0000 UTC - event for execpod-affinityzn6wf: {kubelet ip-172-20-0-246.eu-west-2.compute.internal} Killing: Stopping container agnhost-container Jun 21 20:44:16.705: INFO: POD NODE PHASE GRACE CONDITIONS Jun 21 20:44:16.705: INFO: ... skipping 347 lines ... 
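The FailedToUpdateEndpoint event above ("the object has been modified; please apply your changes to the latest version and try again") is an ordinary optimistic-concurrency conflict; the endpoint controller simply retries on its next sync. For client code that hits the same 409, the usual pattern is to re-read the object and re-apply the change, for example with client-go's retry helper. A minimal sketch assuming a clientset; the namespace, object, and label being set are illustrative.

package conflictretry

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/util/retry"
)

// relabelService re-reads the object on every attempt so each Update carries
// the latest resourceVersion; RetryOnConflict retries only 409 conflicts.
func relabelService(ctx context.Context, client kubernetes.Interface, ns, name string) error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        svc, err := client.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if svc.Labels == nil {
            svc.Labels = map[string]string{}
        }
        svc.Labels["touched"] = "true"
        _, err = client.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
        return err
    })
}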
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [91mJun 21 20:44:10.161: Affinity should hold but didn't.[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:209 [90m------------------------------[0m {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":88,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:21.434: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 121 lines ... [36mOnly supported for providers [azure] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":15,"skipped":114,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:21.457: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 151 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume one after the other [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":17,"skipped":172,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath ... skipping 5 lines ... 
[It] should support readOnly file specified in the volumeMount [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380 Jun 21 20:44:21.947: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics Jun 21 20:44:21.947: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-f5lm [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:44:22.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-f5lm" in namespace "provisioning-5466" to be "Succeeded or Failed" Jun 21 20:44:22.148: INFO: Pod "pod-subpath-test-inlinevolume-f5lm": Phase="Pending", Reason="", readiness=false. Elapsed: 97.615849ms Jun 21 20:44:24.245: INFO: Pod "pod-subpath-test-inlinevolume-f5lm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194989736s Jun 21 20:44:26.345: INFO: Pod "pod-subpath-test-inlinevolume-f5lm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294936469s [1mSTEP[0m: Saw pod success Jun 21 20:44:26.345: INFO: Pod "pod-subpath-test-inlinevolume-f5lm" satisfied condition "Succeeded or Failed" Jun 21 20:44:26.469: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-f5lm container test-container-subpath-inlinevolume-f5lm: <nil> [1mSTEP[0m: delete the pod Jun 21 20:44:26.675: INFO: Waiting for pod pod-subpath-test-inlinevolume-f5lm to disappear Jun 21 20:44:26.773: INFO: Pod pod-subpath-test-inlinevolume-f5lm no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-f5lm Jun 21 20:44:26.773: INFO: Deleting pod "pod-subpath-test-inlinevolume-f5lm" in namespace "provisioning-5466" ... skipping 14 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":16,"skipped":119,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:27.170: INFO: Only supported for providers [gce gke] (not aws) ... skipping 5 lines ... 
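The inline emptydir subPath test above mounts a single file out of the volume read-only; the interesting part is the combination of subPath and readOnly on the volumeMount, which makes writes through the mount fail while the rest of the volume stays untouched. A minimal sketch of such a pod, assuming k8s.io/api/core/v1; names, image, and paths are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlySubPathPod mounts one file out of an emptyDir volume read-only;
// writes through /mnt/volume1/test-file should fail with a read-only error.
func readOnlySubPathPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-readonly-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.33",
                Command: []string{"sh", "-c", "cat /mnt/volume1/test-file"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/mnt/volume1/test-file",
                    SubPath:   "test-file",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{},
                },
            }},
        },
    }
}

func main() {
    out, _ := json.MarshalIndent(readOnlySubPathPod(), "", "  ")
    fmt.Println(string(out))
}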
[sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: windows-gcepd] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m ... skipping 146 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (block volmode)] volumeMode [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":11,"skipped":62,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode ... skipping 33 lines ... Jun 21 20:44:27.168: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 [1mSTEP[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 21 20:44:27.759: INFO: Waiting up to 5m0s for pod "security-context-095313f8-2639-4c86-b240-72e57498c5d1" in namespace "security-context-6670" to be "Succeeded or Failed" Jun 21 20:44:27.856: INFO: Pod "security-context-095313f8-2639-4c86-b240-72e57498c5d1": Phase="Pending", Reason="", readiness=false. Elapsed: 97.051915ms Jun 21 20:44:29.954: INFO: Pod "security-context-095313f8-2639-4c86-b240-72e57498c5d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.195536902s [1mSTEP[0m: Saw pod success Jun 21 20:44:29.954: INFO: Pod "security-context-095313f8-2639-4c86-b240-72e57498c5d1" satisfied condition "Succeeded or Failed" Jun 21 20:44:30.051: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod security-context-095313f8-2639-4c86-b240-72e57498c5d1 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:44:30.256: INFO: Waiting for pod security-context-095313f8-2639-4c86-b240-72e57498c5d1 to disappear Jun 21 20:44:30.356: INFO: Pod security-context-095313f8-2639-4c86-b240-72e57498c5d1 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:44:30.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-6670" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":18,"skipped":175,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 23 lines ... Jun 21 20:44:27.126: INFO: PersistentVolumeClaim pvc-88w8j found but phase is Pending instead of Bound. Jun 21 20:44:29.228: INFO: PersistentVolumeClaim pvc-88w8j found and phase=Bound (12.692270209s) Jun 21 20:44:29.228: INFO: Waiting up to 3m0s for PersistentVolume local-zxzz9 to have phase Bound Jun 21 20:44:29.330: INFO: PersistentVolume local-zxzz9 found and phase=Bound (102.486138ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-pjzn [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:44:29.626: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pjzn" in namespace "provisioning-7443" to be "Succeeded or Failed" Jun 21 20:44:29.723: INFO: Pod "pod-subpath-test-preprovisionedpv-pjzn": Phase="Pending", Reason="", readiness=false. Elapsed: 97.353236ms Jun 21 20:44:31.825: INFO: Pod "pod-subpath-test-preprovisionedpv-pjzn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198642917s Jun 21 20:44:33.924: INFO: Pod "pod-subpath-test-preprovisionedpv-pjzn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.297890437s [1mSTEP[0m: Saw pod success Jun 21 20:44:33.924: INFO: Pod "pod-subpath-test-preprovisionedpv-pjzn" satisfied condition "Succeeded or Failed" Jun 21 20:44:34.021: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-pjzn container test-container-subpath-preprovisionedpv-pjzn: <nil> [1mSTEP[0m: delete the pod Jun 21 20:44:34.225: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pjzn to disappear Jun 21 20:44:34.323: INFO: Pod pod-subpath-test-preprovisionedpv-pjzn no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-pjzn Jun 21 20:44:34.324: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pjzn" in namespace "provisioning-7443" [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-pjzn [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:44:34.521: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pjzn" in namespace "provisioning-7443" to be "Succeeded or Failed" Jun 21 20:44:34.617: INFO: Pod "pod-subpath-test-preprovisionedpv-pjzn": Phase="Pending", Reason="", readiness=false. Elapsed: 96.551092ms Jun 21 20:44:36.716: INFO: Pod "pod-subpath-test-preprovisionedpv-pjzn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195175383s Jun 21 20:44:38.813: INFO: Pod "pod-subpath-test-preprovisionedpv-pjzn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.292579197s [1mSTEP[0m: Saw pod success Jun 21 20:44:38.813: INFO: Pod "pod-subpath-test-preprovisionedpv-pjzn" satisfied condition "Succeeded or Failed" Jun 21 20:44:38.910: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-pjzn container test-container-subpath-preprovisionedpv-pjzn: <nil> [1mSTEP[0m: delete the pod Jun 21 20:44:39.122: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pjzn to disappear Jun 21 20:44:39.219: INFO: Pod pod-subpath-test-preprovisionedpv-pjzn no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-pjzn Jun 21 20:44:39.219: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pjzn" in namespace "provisioning-7443" ... skipping 26 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":16,"skipped":81,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:41.257: INFO: Only supported for providers [azure] (not aws) ... skipping 46 lines ... 
[32m• [SLOW TEST:18.178 seconds][0m [sig-api-machinery] ResourceQuota [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should verify ResourceQuota with terminating scopes. [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":17,"skipped":134,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:45.376: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 83 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support multiple inline ephemeral volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:252[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":16,"skipped":114,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:50.287: INFO: Only supported for providers [gce gke] (not aws) ... skipping 108 lines ... [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:44:50.302: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename secrets [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating projection with secret that has name secret-emptykey-test-57e62d4d-4d1b-4e2e-8dd5-c1debecbccef [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:44:50.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-6039" for this suite. ... skipping 73 lines ... 
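The Secrets test above ("should fail to create secret due to empty secret key") only checks apiserver validation: a Secret whose data map contains an empty key must be rejected at create time, which is why the test finishes without ever creating a pod. A minimal sketch of that negative check, assuming a client-go clientset; the namespace and secret name are illustrative.

package emptykey

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createEmptyKeySecret tries to create a Secret with "" as a data key and
// expects the apiserver to reject it as invalid.
func createEmptyKeySecret(ctx context.Context, client kubernetes.Interface, ns string) error {
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-example"},
        Data:       map[string][]byte{"": []byte("value-1")},
    }
    _, err := client.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{})
    if apierrors.IsInvalid(err) {
        fmt.Println("creation correctly rejected:", err)
        return nil
    }
    if err != nil {
        return err
    }
    return fmt.Errorf("secret with empty key was unexpectedly accepted")
}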
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":12,"skipped":66,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath ... skipping 42 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: blockfs] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":17,"skipped":126,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 20:44:51.097: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name configmap-test-volume-map-bf249f96-3f0c-4a45-a61c-77c70fe7f1d4 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 20:44:51.786: INFO: Waiting up to 5m0s for pod "pod-configmaps-63cc7b4d-34b6-451f-91d6-4d410c657540" in namespace "configmap-9921" to be "Succeeded or Failed" Jun 21 20:44:51.884: INFO: Pod "pod-configmaps-63cc7b4d-34b6-451f-91d6-4d410c657540": Phase="Pending", Reason="", readiness=false. Elapsed: 98.189851ms Jun 21 20:44:53.984: INFO: Pod "pod-configmaps-63cc7b4d-34b6-451f-91d6-4d410c657540": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.198314896s Jun 21 20:44:56.082: INFO: Pod "pod-configmaps-63cc7b4d-34b6-451f-91d6-4d410c657540": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.296282857s [1mSTEP[0m: Saw pod success Jun 21 20:44:56.082: INFO: Pod "pod-configmaps-63cc7b4d-34b6-451f-91d6-4d410c657540" satisfied condition "Succeeded or Failed" Jun 21 20:44:56.179: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-configmaps-63cc7b4d-34b6-451f-91d6-4d410c657540 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 20:44:56.397: INFO: Waiting for pod pod-configmaps-63cc7b4d-34b6-451f-91d6-4d410c657540 to disappear Jun 21 20:44:56.494: INFO: Pod pod-configmaps-63cc7b4d-34b6-451f-91d6-4d410c657540 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.600 seconds][0m [sig-storage] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":126,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:44:56.714: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping ... skipping 93 lines ... [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 21 20:44:56.796: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-82473655-d57c-4b4d-bf94-d9739dab9b77" in namespace "security-context-test-4852" to be "Succeeded or Failed" Jun 21 20:44:56.893: INFO: Pod "busybox-privileged-false-82473655-d57c-4b4d-bf94-d9739dab9b77": Phase="Pending", Reason="", readiness=false. Elapsed: 96.852244ms Jun 21 20:44:58.990: INFO: Pod "busybox-privileged-false-82473655-d57c-4b4d-bf94-d9739dab9b77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194496913s Jun 21 20:45:01.088: INFO: Pod "busybox-privileged-false-82473655-d57c-4b4d-bf94-d9739dab9b77": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.291935496s Jun 21 20:45:01.088: INFO: Pod "busybox-privileged-false-82473655-d57c-4b4d-bf94-d9739dab9b77" satisfied condition "Succeeded or Failed" Jun 21 20:45:01.188: INFO: Got logs for pod "busybox-privileged-false-82473655-d57c-4b4d-bf94-d9739dab9b77": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 20:45:01.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-4852" for this suite. ... skipping 3 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m When creating a pod with privileged [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232[0m should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":74,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes ... skipping 9 lines ... Jun 21 20:44:21.979: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volume-1830q6qg7 [1mSTEP[0m: creating a claim Jun 21 20:44:22.079: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-lp6w [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 20:44:22.393: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-lp6w" in namespace "volume-1830" to be "Succeeded or Failed" Jun 21 20:44:22.491: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 97.996129ms Jun 21 20:44:24.594: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200367746s Jun 21 20:44:26.693: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299875703s Jun 21 20:44:28.792: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398823454s Jun 21 20:44:30.890: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49696321s Jun 21 20:44:32.988: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.594814678s Jun 21 20:44:35.086: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 12.692810815s Jun 21 20:44:37.187: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 14.793290374s Jun 21 20:44:39.292: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.898967896s Jun 21 20:44:41.396: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 19.003107274s Jun 21 20:44:43.495: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Pending", Reason="", readiness=false. Elapsed: 21.101350845s Jun 21 20:44:45.593: INFO: Pod "exec-volume-test-dynamicpv-lp6w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.199954067s [1mSTEP[0m: Saw pod success Jun 21 20:44:45.593: INFO: Pod "exec-volume-test-dynamicpv-lp6w" satisfied condition "Succeeded or Failed" Jun 21 20:44:45.693: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod exec-volume-test-dynamicpv-lp6w container exec-container-dynamicpv-lp6w: <nil> [1mSTEP[0m: delete the pod Jun 21 20:44:45.919: INFO: Waiting for pod exec-volume-test-dynamicpv-lp6w to disappear Jun 21 20:44:46.022: INFO: Pod exec-volume-test-dynamicpv-lp6w no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-lp6w Jun 21 20:44:46.022: INFO: Deleting pod "exec-volume-test-dynamicpv-lp6w" in namespace "volume-1830" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":117,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral ... skipping 56 lines ... Jun 21 20:44:57.735: INFO: PersistentVolumeClaim pvc-8pf56 found but phase is Pending instead of Bound. Jun 21 20:44:59.833: INFO: PersistentVolumeClaim pvc-8pf56 found and phase=Bound (14.824582074s) Jun 21 20:44:59.833: INFO: Waiting up to 3m0s for PersistentVolume local-mgpnf to have phase Bound Jun 21 20:44:59.931: INFO: PersistentVolume local-mgpnf found and phase=Bound (98.208586ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-hpdv [1mSTEP[0m: Creating a pod to test subpath Jun 21 20:45:00.272: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hpdv" in namespace "provisioning-2032" to be "Succeeded or Failed" Jun 21 20:45:00.407: INFO: Pod "pod-subpath-test-preprovisionedpv-hpdv": Phase="Pending", Reason="", readiness=false. Elapsed: 134.178185ms Jun 21 20:45:02.507: INFO: Pod "pod-subpath-test-preprovisionedpv-hpdv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234746034s Jun 21 20:45:04.615: INFO: Pod "pod-subpath-test-preprovisionedpv-hpdv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342310232s Jun 21 20:45:06.713: INFO: Pod "pod-subpath-test-preprovisionedpv-hpdv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.440402097s [1mSTEP[0m: Saw pod success Jun 21 20:45:06.713: INFO: Pod "pod-subpath-test-preprovisionedpv-hpdv" satisfied condition "Succeeded or Failed" Jun 21 20:45:06.810: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-hpdv container test-container-subpath-preprovisionedpv-hpdv: <nil> [1mSTEP[0m: delete the pod Jun 21 20:45:07.018: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hpdv to disappear Jun 21 20:45:07.120: INFO: Pod pod-subpath-test-preprovisionedpv-hpdv no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-hpdv Jun 21 20:45:07.120: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hpdv" in namespace "provisioning-2032" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":17,"skipped":83,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:45:08.507: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 118 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":18,"skipped":147,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 20:45:11.223: INFO: Driver emptydir doesn't support ext3 -- skipping ... skipping 39291 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:08:07.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-8245" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":18,"skipped":159,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:08:07.210: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 94 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 21 21:08:07.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f70ec09-edf4-4262-a555-f7b6f35302ad" in namespace "projected-9700" to be "Succeeded or Failed" Jun 21 21:08:07.922: INFO: Pod "downwardapi-volume-0f70ec09-edf4-4262-a555-f7b6f35302ad": Phase="Pending", Reason="", readiness=false. Elapsed: 97.010087ms Jun 21 21:08:10.021: INFO: Pod "downwardapi-volume-0f70ec09-edf4-4262-a555-f7b6f35302ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195847794s [1mSTEP[0m: Saw pod success Jun 21 21:08:10.021: INFO: Pod "downwardapi-volume-0f70ec09-edf4-4262-a555-f7b6f35302ad" satisfied condition "Succeeded or Failed" Jun 21 21:08:10.117: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod downwardapi-volume-0f70ec09-edf4-4262-a555-f7b6f35302ad container client-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:08:10.329: INFO: Waiting for pod downwardapi-volume-0f70ec09-edf4-4262-a555-f7b6f35302ad to disappear Jun 21 21:08:10.432: INFO: Pod downwardapi-volume-0f70ec09-edf4-4262-a555-f7b6f35302ad no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:08:10.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-9700" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":174,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:08:10.629: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename resourcequota [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 11 lines ... [32m• [SLOW TEST:7.994 seconds][0m [sig-api-machinery] ResourceQuota [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":20,"skipped":174,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:08:18.626: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 32 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.","total":-1,"completed":24,"skipped":213,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:07:57.423: INFO: >>> kubeConfig: /root/.kube/config ... skipping 27 lines ... Jun 21 21:08:11.450: INFO: PersistentVolumeClaim pvc-ls7fr found but phase is Pending instead of Bound. Jun 21 21:08:13.548: INFO: PersistentVolumeClaim pvc-ls7fr found and phase=Bound (10.590496786s) Jun 21 21:08:13.548: INFO: Waiting up to 3m0s for PersistentVolume local-ln2jr to have phase Bound Jun 21 21:08:13.648: INFO: PersistentVolume local-ln2jr found and phase=Bound (100.289635ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-hs9w [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:08:13.949: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hs9w" in namespace "provisioning-1974" to be "Succeeded or Failed" Jun 21 21:08:14.137: INFO: Pod "pod-subpath-test-preprovisionedpv-hs9w": Phase="Pending", Reason="", readiness=false. Elapsed: 188.079557ms Jun 21 21:08:16.235: INFO: Pod "pod-subpath-test-preprovisionedpv-hs9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285793573s Jun 21 21:08:18.337: INFO: Pod "pod-subpath-test-preprovisionedpv-hs9w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.387667159s [1mSTEP[0m: Saw pod success Jun 21 21:08:18.337: INFO: Pod "pod-subpath-test-preprovisionedpv-hs9w" satisfied condition "Succeeded or Failed" Jun 21 21:08:18.440: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-hs9w container test-container-volume-preprovisionedpv-hs9w: <nil> [1mSTEP[0m: delete the pod Jun 21 21:08:18.648: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hs9w to disappear Jun 21 21:08:18.746: INFO: Pod pod-subpath-test-preprovisionedpv-hs9w no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-hs9w Jun 21 21:08:18.746: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hs9w" in namespace "provisioning-1974" ... skipping 34 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":25,"skipped":213,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:08:22.448: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 51 lines ... Jun 21 21:08:11.402: INFO: PersistentVolumeClaim pvc-mtkp5 found but phase is Pending instead of Bound. Jun 21 21:08:13.503: INFO: PersistentVolumeClaim pvc-mtkp5 found and phase=Bound (6.396343694s) Jun 21 21:08:13.503: INFO: Waiting up to 3m0s for PersistentVolume local-hrnf6 to have phase Bound Jun 21 21:08:13.605: INFO: PersistentVolume local-hrnf6 found and phase=Bound (101.853195ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-5rsv [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:08:13.940: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5rsv" in namespace "provisioning-7997" to be "Succeeded or Failed" Jun 21 21:08:14.137: INFO: Pod "pod-subpath-test-preprovisionedpv-5rsv": Phase="Pending", Reason="", readiness=false. Elapsed: 197.225944ms Jun 21 21:08:16.236: INFO: Pod "pod-subpath-test-preprovisionedpv-5rsv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295647662s Jun 21 21:08:18.337: INFO: Pod "pod-subpath-test-preprovisionedpv-5rsv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.397171661s [1mSTEP[0m: Saw pod success Jun 21 21:08:18.337: INFO: Pod "pod-subpath-test-preprovisionedpv-5rsv" satisfied condition "Succeeded or Failed" Jun 21 21:08:18.440: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-5rsv container test-container-subpath-preprovisionedpv-5rsv: <nil> [1mSTEP[0m: delete the pod Jun 21 21:08:18.658: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5rsv to disappear Jun 21 21:08:18.756: INFO: Pod pod-subpath-test-preprovisionedpv-5rsv no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-5rsv Jun 21 21:08:18.756: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5rsv" in namespace "provisioning-7997" ... skipping 34 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":23,"skipped":149,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:08:22.493: INFO: Only supported for providers [openstack] (not aws) [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 100 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:08:23.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "metrics-grabber-7971" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":26,"skipped":226,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:08:18.638: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-b1da5048-7914-487c-aa04-a06ccb5eafd2 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 21:08:19.347: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f267a2b1-ab90-4665-8f8f-b7f9ced8acda" in namespace "projected-4388" to be "Succeeded or Failed" Jun 21 21:08:19.444: INFO: Pod "pod-projected-configmaps-f267a2b1-ab90-4665-8f8f-b7f9ced8acda": Phase="Pending", Reason="", readiness=false. Elapsed: 97.001612ms Jun 21 21:08:21.557: INFO: Pod "pod-projected-configmaps-f267a2b1-ab90-4665-8f8f-b7f9ced8acda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209981817s Jun 21 21:08:23.678: INFO: Pod "pod-projected-configmaps-f267a2b1-ab90-4665-8f8f-b7f9ced8acda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331533245s [1mSTEP[0m: Saw pod success Jun 21 21:08:23.678: INFO: Pod "pod-projected-configmaps-f267a2b1-ab90-4665-8f8f-b7f9ced8acda" satisfied condition "Succeeded or Failed" Jun 21 21:08:23.804: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-projected-configmaps-f267a2b1-ab90-4665-8f8f-b7f9ced8acda container projected-configmap-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 21 21:08:24.116: INFO: Waiting for pod pod-projected-configmaps-f267a2b1-ab90-4665-8f8f-b7f9ced8acda to disappear Jun 21 21:08:24.221: INFO: Pod pod-projected-configmaps-f267a2b1-ab90-4665-8f8f-b7f9ced8acda no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:5.813 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":183,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:08:24.460: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0644 on node default medium Jun 21 21:08:25.047: INFO: Waiting up to 5m0s for pod "pod-1d54b131-92fe-41db-86fa-4181f558fbb4" in namespace "emptydir-7345" to be "Succeeded or Failed" Jun 21 21:08:25.144: INFO: Pod "pod-1d54b131-92fe-41db-86fa-4181f558fbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 96.774686ms Jun 21 21:08:27.245: INFO: Pod "pod-1d54b131-92fe-41db-86fa-4181f558fbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197700055s Jun 21 21:08:29.343: INFO: Pod "pod-1d54b131-92fe-41db-86fa-4181f558fbb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.296068388s [1mSTEP[0m: Saw pod success Jun 21 21:08:29.343: INFO: Pod "pod-1d54b131-92fe-41db-86fa-4181f558fbb4" satisfied condition "Succeeded or Failed" Jun 21 21:08:29.440: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-1d54b131-92fe-41db-86fa-4181f558fbb4 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:08:29.708: INFO: Waiting for pod pod-1d54b131-92fe-41db-86fa-4181f558fbb4 to disappear Jun 21 21:08:29.811: INFO: Pod pod-1d54b131-92fe-41db-86fa-4181f558fbb4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:5.551 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":187,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:08:30.017: INFO: Only supported for providers [vsphere] (not aws) ... skipping 62 lines ... [32m• [SLOW TEST:8.190 seconds][0m [sig-storage] Downward API volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should update annotations on modification [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":227,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath ... skipping 9 lines ... Jun 21 21:07:47.571: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-1488lxd9r [1mSTEP[0m: creating a claim Jun 21 21:07:47.675: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-cj78 [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 21 21:07:48.032: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cj78" in namespace "provisioning-1488" to be "Succeeded or Failed" Jun 21 21:07:48.141: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Pending", Reason="", readiness=false. Elapsed: 109.36674ms Jun 21 21:07:50.240: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20779393s Jun 21 21:07:52.337: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305467124s Jun 21 21:07:54.439: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40720597s Jun 21 21:07:56.560: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Pending", Reason="", readiness=false. Elapsed: 8.527830808s Jun 21 21:07:58.665: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Pending", Reason="", readiness=false. Elapsed: 10.632949085s ... skipping 5 lines ... Jun 21 21:08:11.264: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Running", Reason="", readiness=true. Elapsed: 23.232002689s Jun 21 21:08:13.364: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Running", Reason="", readiness=true. 
Elapsed: 25.3318143s Jun 21 21:08:15.461: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Running", Reason="", readiness=true. Elapsed: 27.42917826s Jun 21 21:08:17.558: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Running", Reason="", readiness=true. Elapsed: 29.526289098s Jun 21 21:08:19.677: INFO: Pod "pod-subpath-test-dynamicpv-cj78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.645377604s [1mSTEP[0m: Saw pod success Jun 21 21:08:19.677: INFO: Pod "pod-subpath-test-dynamicpv-cj78" satisfied condition "Succeeded or Failed" Jun 21 21:08:19.782: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-cj78 container test-container-subpath-dynamicpv-cj78: <nil> [1mSTEP[0m: delete the pod Jun 21 21:08:19.992: INFO: Waiting for pod pod-subpath-test-dynamicpv-cj78 to disappear Jun 21 21:08:20.089: INFO: Pod pod-subpath-test-dynamicpv-cj78 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-cj78 Jun 21 21:08:20.089: INFO: Deleting pod "pod-subpath-test-dynamicpv-cj78" in namespace "provisioning-1488" ... skipping 20 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":22,"skipped":187,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:08:36.342: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 131 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume one after the other [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":24,"skipped":163,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:08:39.656: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping ... skipping 146 lines ... 
Jun 21 20:58:41.014: INFO: Wait up to 5m0s for pod PV pvc-772d7904-cd41-404b-8a9b-5ad315f1378f to be fully deleted Jun 21 20:58:41.015: INFO: Waiting up to 5m0s for PersistentVolume pvc-772d7904-cd41-404b-8a9b-5ad315f1378f to get deleted Jun 21 20:58:41.149: INFO: PersistentVolume pvc-772d7904-cd41-404b-8a9b-5ad315f1378f was removed [1mSTEP[0m: Deleting sc [1mSTEP[0m: deleting the test namespace: ephemeral-7983 [1mSTEP[0m: Waiting for namespaces [ephemeral-7983] to vanish Jun 21 21:03:42.096: INFO: error deleting namespace ephemeral-7983: timed out waiting for the condition [1mSTEP[0m: uninstalling csi csi-hostpath driver Jun 21 21:03:42.096: INFO: deleting *v1.ServiceAccount: ephemeral-7983-6669/csi-attacher Jun 21 21:03:42.201: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-7983 Jun 21 21:03:42.299: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-7983 Jun 21 21:03:42.397: INFO: deleting *v1.Role: ephemeral-7983-6669/external-attacher-cfg-ephemeral-7983 Jun 21 21:03:42.494: INFO: deleting *v1.RoleBinding: ephemeral-7983-6669/csi-attacher-role-cfg ... skipping 30 lines ... Jun 21 21:03:45.871: INFO: deleting *v1.RoleBinding: ephemeral-7983-6669/csi-hostpathplugin-resizer-role Jun 21 21:03:45.975: INFO: deleting *v1.RoleBinding: ephemeral-7983-6669/csi-hostpathplugin-snapshotter-role Jun 21 21:03:46.073: INFO: deleting *v1.StatefulSet: ephemeral-7983-6669/csi-hostpathplugin Jun 21 21:03:46.171: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-7983 [1mSTEP[0m: deleting the driver namespace: ephemeral-7983-6669 [1mSTEP[0m: Waiting for namespaces [ephemeral-7983-6669] to vanish Jun 21 21:08:46.773: INFO: error deleting namespace ephemeral-7983-6669: timed out waiting for the condition [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:08:46.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "ephemeral-7983" for this suite. [1mSTEP[0m: Destroying namespace "ephemeral-7983-6669" for this suite. ... skipping 5 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:214[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":19,"skipped":118,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:08:47.126: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 41 lines ... Jun 21 21:08:42.369: INFO: PersistentVolumeClaim pvc-jlf2b found but phase is Pending instead of Bound. 
Jun 21 21:08:44.475: INFO: PersistentVolumeClaim pvc-jlf2b found and phase=Bound (4.307913496s) Jun 21 21:08:44.475: INFO: Waiting up to 3m0s for PersistentVolume local-t8mxh to have phase Bound Jun 21 21:08:44.574: INFO: PersistentVolume local-t8mxh found and phase=Bound (98.965433ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-5nf6 [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 21:08:44.865: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-5nf6" in namespace "volume-1515" to be "Succeeded or Failed" Jun 21 21:08:44.986: INFO: Pod "exec-volume-test-preprovisionedpv-5nf6": Phase="Pending", Reason="", readiness=false. Elapsed: 121.444144ms Jun 21 21:08:47.084: INFO: Pod "exec-volume-test-preprovisionedpv-5nf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219614265s Jun 21 21:08:49.182: INFO: Pod "exec-volume-test-preprovisionedpv-5nf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.317499104s [1mSTEP[0m: Saw pod success Jun 21 21:08:49.182: INFO: Pod "exec-volume-test-preprovisionedpv-5nf6" satisfied condition "Succeeded or Failed" Jun 21 21:08:49.280: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-5nf6 container exec-container-preprovisionedpv-5nf6: <nil> [1mSTEP[0m: delete the pod Jun 21 21:08:49.537: INFO: Waiting for pod exec-volume-test-preprovisionedpv-5nf6 to disappear Jun 21 21:08:49.735: INFO: Pod exec-volume-test-preprovisionedpv-5nf6 no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-5nf6 Jun 21 21:08:49.735: INFO: Deleting pod "exec-volume-test-preprovisionedpv-5nf6" in namespace "volume-1515" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":23,"skipped":193,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:08:51.058: INFO: Driver local doesn't support ext3 -- skipping ... skipping 231 lines ... 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 21 21:08:57.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf6124e4-4b5b-4351-877a-38916970bc85" in namespace "projected-1265" to be "Succeeded or Failed" Jun 21 21:08:57.932: INFO: Pod "downwardapi-volume-cf6124e4-4b5b-4351-877a-38916970bc85": Phase="Pending", Reason="", readiness=false. Elapsed: 97.015919ms Jun 21 21:09:00.029: INFO: Pod "downwardapi-volume-cf6124e4-4b5b-4351-877a-38916970bc85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194182158s Jun 21 21:09:02.128: INFO: Pod "downwardapi-volume-cf6124e4-4b5b-4351-877a-38916970bc85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.29323381s [1mSTEP[0m: Saw pod success Jun 21 21:09:02.128: INFO: Pod "downwardapi-volume-cf6124e4-4b5b-4351-877a-38916970bc85" satisfied condition "Succeeded or Failed" Jun 21 21:09:02.227: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod downwardapi-volume-cf6124e4-4b5b-4351-877a-38916970bc85 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:09:02.490: INFO: Waiting for pod downwardapi-volume-cf6124e4-4b5b-4351-877a-38916970bc85 to disappear Jun 21 21:09:02.591: INFO: Pod downwardapi-volume-cf6124e4-4b5b-4351-877a-38916970bc85 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.560 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide container's memory limit [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":254,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:09:02.800: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 63 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:09:06.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-1365" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":29,"skipped":262,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:09:06.719: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 123 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23[0m Granular Checks: Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30[0m should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":175,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:09:10.713: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 43 lines ... Jun 21 21:08:57.343: INFO: PersistentVolumeClaim pvc-jgp9b found but phase is Pending instead of Bound. Jun 21 21:08:59.442: INFO: PersistentVolumeClaim pvc-jgp9b found and phase=Bound (8.499032872s) Jun 21 21:08:59.442: INFO: Waiting up to 3m0s for PersistentVolume local-x9psx to have phase Bound Jun 21 21:08:59.540: INFO: PersistentVolume local-x9psx found and phase=Bound (97.732906ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-xn8s [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:08:59.843: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xn8s" in namespace "provisioning-1631" to be "Succeeded or Failed" Jun 21 21:08:59.941: INFO: Pod "pod-subpath-test-preprovisionedpv-xn8s": Phase="Pending", Reason="", readiness=false. Elapsed: 98.149947ms Jun 21 21:09:02.041: INFO: Pod "pod-subpath-test-preprovisionedpv-xn8s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198018162s Jun 21 21:09:04.149: INFO: Pod "pod-subpath-test-preprovisionedpv-xn8s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305475041s Jun 21 21:09:06.248: INFO: Pod "pod-subpath-test-preprovisionedpv-xn8s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404482596s Jun 21 21:09:08.373: INFO: Pod "pod-subpath-test-preprovisionedpv-xn8s": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.530381352s [1mSTEP[0m: Saw pod success Jun 21 21:09:08.374: INFO: Pod "pod-subpath-test-preprovisionedpv-xn8s" satisfied condition "Succeeded or Failed" Jun 21 21:09:08.543: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-xn8s container test-container-volume-preprovisionedpv-xn8s: <nil> [1mSTEP[0m: delete the pod Jun 21 21:09:08.792: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xn8s to disappear Jun 21 21:09:08.896: INFO: Pod pod-subpath-test-preprovisionedpv-xn8s no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-xn8s Jun 21 21:09:08.896: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xn8s" in namespace "provisioning-1631" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":20,"skipped":126,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:09:10.845: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 43 lines ... [32m• [SLOW TEST:5.941 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":179,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:09:16.662: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 36 lines ... 
[32m• [SLOW TEST:6.103 seconds][0m [sig-apps] DisruptionController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m evictions: enough pods, absolute => should allow an eviction [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":21,"skipped":134,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 108 lines ... [32m• [SLOW TEST:18.580 seconds][0m [sig-api-machinery] ResourceQuota [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should verify ResourceQuota with best effort scope. [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":30,"skipped":275,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 29 lines ... [32m• [SLOW TEST:9.641 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should support configurable pod resolv.conf [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":22,"skipped":160,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:09:28.148: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping ... skipping 64 lines ... 
Jun 21 20:57:15.221: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9222 Jun 21 20:57:15.325: INFO: creating *v1.StatefulSet: csi-mock-volumes-9222-3730/csi-mockplugin-attacher Jun 21 20:57:15.426: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9222" Jun 21 20:57:15.528: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9222 to register on node ip-172-20-0-5.eu-west-2.compute.internal [1mSTEP[0m: Creating pod [1mSTEP[0m: checking for CSIInlineVolumes feature Jun 21 20:57:25.341: INFO: Error getting logs for pod inline-volume-7z5jk: the server rejected our request for an unknown reason (get pods inline-volume-7z5jk) Jun 21 20:57:25.449: INFO: Deleting pod "inline-volume-7z5jk" in namespace "csi-mock-volumes-9222" Jun 21 20:57:25.547: INFO: Wait up to 5m0s for pod "inline-volume-7z5jk" to be fully deleted [1mSTEP[0m: Deleting the previously created pod Jun 21 20:59:29.770: INFO: Deleting pod "pvc-volume-tester-4xn8z" in namespace "csi-mock-volumes-9222" Jun 21 20:59:29.947: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4xn8z" to be fully deleted [1mSTEP[0m: Checking CSI driver logs Jun 21 20:59:32.359: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 7cb1854b-1bca-44c1-837c-f4f9a6f0b226 Jun 21 20:59:32.359: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Jun 21 20:59:32.359: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true Jun 21 20:59:32.359: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-4xn8z Jun 21 20:59:32.359: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-9222 Jun 21 20:59:32.359: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-4d0418b58e9c393e6a0849ad35a1d31f1e6069d149b07aa8ee9e060f81118971","target_path":"/var/lib/kubelet/pods/7cb1854b-1bca-44c1-837c-f4f9a6f0b226/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} [1mSTEP[0m: Deleting pod pvc-volume-tester-4xn8z Jun 21 20:59:32.359: INFO: Deleting pod "pvc-volume-tester-4xn8z" in namespace "csi-mock-volumes-9222" [1mSTEP[0m: Cleaning up resources [1mSTEP[0m: deleting the test namespace: csi-mock-volumes-9222 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-9222] to vanish Jun 21 21:04:32.918: INFO: error deleting namespace csi-mock-volumes-9222: timed out waiting for the condition [1mSTEP[0m: uninstalling csi mock driver Jun 21 21:04:32.918: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9222-3730/csi-attacher Jun 21 21:04:33.021: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9222 Jun 21 21:04:33.122: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9222 Jun 21 21:04:33.241: INFO: deleting *v1.Role: csi-mock-volumes-9222-3730/external-attacher-cfg-csi-mock-volumes-9222 Jun 21 21:04:33.339: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9222-3730/csi-attacher-role-cfg ... skipping 22 lines ... 
Jun 21 21:04:35.809: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9222 Jun 21 21:04:35.907: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9222-3730/csi-mockplugin Jun 21 21:04:36.004: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9222 Jun 21 21:04:36.108: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9222-3730/csi-mockplugin-attacher [1mSTEP[0m: deleting the driver namespace: csi-mock-volumes-9222-3730 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-9222-3730] to vanish Jun 21 21:09:36.767: INFO: error deleting namespace csi-mock-volumes-9222-3730: timed out waiting for the condition [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:09:36.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "csi-mock-volumes-9222" for this suite. [1mSTEP[0m: Destroying namespace "csi-mock-volumes-9222-3730" for this suite. ... skipping 59 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Simple pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should support exec using resource/name [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:431[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":31,"skipped":280,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:09:41.071: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 78 lines ... 
[36mOnly supported for node OS distro [gci ubuntu custom] (not debian)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263 [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":26,"skipped":236,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:09:37.164: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name configmap-test-volume-8a0b8c1f-68df-4181-a73f-d5130a65aceb [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 21:09:37.886: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba988e75-b2ac-405f-a70e-93750213086b" in namespace "configmap-5609" to be "Succeeded or Failed" Jun 21 21:09:38.054: INFO: Pod "pod-configmaps-ba988e75-b2ac-405f-a70e-93750213086b": Phase="Pending", Reason="", readiness=false. Elapsed: 167.614541ms Jun 21 21:09:40.153: INFO: Pod "pod-configmaps-ba988e75-b2ac-405f-a70e-93750213086b": Phase="Running", Reason="", readiness=true. Elapsed: 2.266517804s Jun 21 21:09:42.252: INFO: Pod "pod-configmaps-ba988e75-b2ac-405f-a70e-93750213086b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.365861524s [1mSTEP[0m: Saw pod success Jun 21 21:09:42.252: INFO: Pod "pod-configmaps-ba988e75-b2ac-405f-a70e-93750213086b" satisfied condition "Succeeded or Failed" Jun 21 21:09:42.352: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-configmaps-ba988e75-b2ac-405f-a70e-93750213086b container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:09:42.659: INFO: Waiting for pod pod-configmaps-ba988e75-b2ac-405f-a70e-93750213086b to disappear Jun 21 21:09:42.769: INFO: Pod pod-configmaps-ba988e75-b2ac-405f-a70e-93750213086b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 126 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should resize volume when PVC is edited while pod is using it [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":27,"skipped":181,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":236,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:09:42.975: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename persistent-local-volumes-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 78 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and write from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":28,"skipped":236,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:09:57.423: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 25 lines ... 
[sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: cinder] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for providers [openstack] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092 [90m------------------------------[0m ... skipping 24 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:09:59.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-2366" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":29,"skipped":240,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:09:57.079: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 21 21:09:57.701: INFO: Waiting up to 5m0s for pod "security-context-c45572e0-de0b-4d05-ae53-c20343101461" in namespace "security-context-7209" to be "Succeeded or Failed" Jun 21 21:09:57.800: INFO: Pod "security-context-c45572e0-de0b-4d05-ae53-c20343101461": Phase="Pending", Reason="", readiness=false. Elapsed: 99.174043ms Jun 21 21:09:59.901: INFO: Pod "security-context-c45572e0-de0b-4d05-ae53-c20343101461": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199881307s Jun 21 21:10:01.999: INFO: Pod "security-context-c45572e0-de0b-4d05-ae53-c20343101461": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.297420987s [1mSTEP[0m: Saw pod success Jun 21 21:10:01.999: INFO: Pod "security-context-c45572e0-de0b-4d05-ae53-c20343101461" satisfied condition "Succeeded or Failed" Jun 21 21:10:02.097: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod security-context-c45572e0-de0b-4d05-ae53-c20343101461 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:10:02.304: INFO: Waiting for pod security-context-c45572e0-de0b-4d05-ae53-c20343101461 to disappear Jun 21 21:10:02.402: INFO: Pod security-context-c45572e0-de0b-4d05-ae53-c20343101461 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 15 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 21 21:10:00.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32ef7b75-c395-4669-98fa-fd81e810eec9" in namespace "projected-2573" to be "Succeeded or Failed" Jun 21 21:10:00.801: INFO: Pod "downwardapi-volume-32ef7b75-c395-4669-98fa-fd81e810eec9": Phase="Pending", Reason="", readiness=false. Elapsed: 109.658634ms Jun 21 21:10:02.901: INFO: Pod "downwardapi-volume-32ef7b75-c395-4669-98fa-fd81e810eec9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21028565s [1mSTEP[0m: Saw pod success Jun 21 21:10:02.901: INFO: Pod "downwardapi-volume-32ef7b75-c395-4669-98fa-fd81e810eec9" satisfied condition "Succeeded or Failed" Jun 21 21:10:03.001: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod downwardapi-volume-32ef7b75-c395-4669-98fa-fd81e810eec9 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:10:03.248: INFO: Waiting for pod downwardapi-volume-32ef7b75-c395-4669-98fa-fd81e810eec9 to disappear Jun 21 21:10:03.346: INFO: Pod downwardapi-volume-32ef7b75-c395-4669-98fa-fd81e810eec9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:10:03.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-2573" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":241,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 83 lines ... 
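Editor's note: the Security Context case above exercises pod-level pod.Spec.SecurityContext.RunAsUser and RunAsGroup. A minimal sketch of such a pod (helper name, image, and command are illustrative assumptions):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runAsPod builds a pod whose pod-level security context pins the UID and GID;
// the container just prints the IDs so a test can compare them to the spec.
func runAsPod(namespace string, uid, gid int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-example", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:  &uid,
				RunAsGroup: &gid,
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "id -u && id -G"},
			}},
		},
	}
}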
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99[0m should perform rolling updates and roll backs of template modifications with PVCs [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:290[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","total":-1,"completed":29,"skipped":218,"failed":0} [BeforeEach] [sig-api-machinery] Server request timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:10:03.924: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename request-timeout [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 13 lines ... [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name configmap-test-volume-5f6ea4c9-e39a-4983-bf79-5b19efc9df8c [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 21:10:04.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-ebd379e7-0dd1-4547-8eeb-2a40627a06e3" in namespace "configmap-114" to be "Succeeded or Failed" Jun 21 21:10:04.418: INFO: Pod "pod-configmaps-ebd379e7-0dd1-4547-8eeb-2a40627a06e3": Phase="Pending", Reason="", readiness=false. Elapsed: 102.812253ms Jun 21 21:10:06.517: INFO: Pod "pod-configmaps-ebd379e7-0dd1-4547-8eeb-2a40627a06e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20187959s Jun 21 21:10:08.616: INFO: Pod "pod-configmaps-ebd379e7-0dd1-4547-8eeb-2a40627a06e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.300415593s [1mSTEP[0m: Saw pod success Jun 21 21:10:08.616: INFO: Pod "pod-configmaps-ebd379e7-0dd1-4547-8eeb-2a40627a06e3" satisfied condition "Succeeded or Failed" Jun 21 21:10:08.715: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-configmaps-ebd379e7-0dd1-4547-8eeb-2a40627a06e3 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:10:08.941: INFO: Waiting for pod pod-configmaps-ebd379e7-0dd1-4547-8eeb-2a40627a06e3 to disappear Jun 21 21:10:09.038: INFO: Pod pod-configmaps-ebd379e7-0dd1-4547-8eeb-2a40627a06e3 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
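Editor's note: nearly every spec in this run prints the same 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' loop followed by per-poll Phase lines. A compact sketch of that polling pattern with client-go (the interval, timeout, and treating Failed as an error are assumptions, not the framework's exact code):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion polls the pod until it reaches Succeeded or Failed,
// mirroring the "Succeeded or Failed" condition printed throughout this log.
func waitForPodCompletion(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return true, fmt.Errorf("pod %s/%s failed: %s", namespace, name, pod.Status.Reason)
		default:
			return false, nil // still Pending or Running, keep polling
		}
	})
}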
[32m• [SLOW TEST:5.678 seconds][0m [sig-storage] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":245,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 12 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:10:10.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-6141" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":32,"skipped":249,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning ... skipping 121 lines ... Jun 21 21:00:08.606: INFO: PersistentVolume pvc-e929d15d-a2e7-407f-ba2d-aab428c4e6dc was removed Jun 21 21:00:08.606: INFO: deleting claim "provisioning-5729"/"pvc-whj2x" Jun 21 21:00:08.710: INFO: deleting source PVC "provisioning-5729"/"pvc-4cwbc" Jun 21 21:00:08.814: INFO: deleting storage class provisioning-57297mnrd [1mSTEP[0m: deleting the test namespace: provisioning-5729 [1mSTEP[0m: Waiting for namespaces [provisioning-5729] to vanish Jun 21 21:05:09.380: INFO: error deleting namespace provisioning-5729: timed out waiting for the condition [1mSTEP[0m: uninstalling csi csi-hostpath driver Jun 21 21:05:09.380: INFO: deleting *v1.ServiceAccount: provisioning-5729-6675/csi-attacher Jun 21 21:05:09.478: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-5729 Jun 21 21:05:09.606: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-5729 Jun 21 21:05:09.704: INFO: deleting *v1.Role: provisioning-5729-6675/external-attacher-cfg-provisioning-5729 Jun 21 21:05:09.834: INFO: deleting *v1.RoleBinding: provisioning-5729-6675/csi-attacher-role-cfg ... skipping 30 lines ... 
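Editor's note: the "error deleting namespace ...: timed out waiting for the condition" messages that start appearing above (and recur for the CSI namespaces later in this run) come from the cleanup step that deletes a test namespace and then polls until a Get returns NotFound. A hedged client-go sketch of that wait (helper name, interval, and error policy are assumptions):

package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceGone issues the delete and then polls until the namespace no
// longer exists. When finalizers (for example on leaked storage objects) block
// the delete, this is the loop that eventually gives up with "timed out waiting
// for the condition", as seen several times in this run.
func waitForNamespaceGone(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	if err := cs.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}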
Jun 21 21:05:13.197: INFO: deleting *v1.RoleBinding: provisioning-5729-6675/csi-hostpathplugin-resizer-role Jun 21 21:05:13.299: INFO: deleting *v1.RoleBinding: provisioning-5729-6675/csi-hostpathplugin-snapshotter-role Jun 21 21:05:13.400: INFO: deleting *v1.StatefulSet: provisioning-5729-6675/csi-hostpathplugin Jun 21 21:05:13.499: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-5729 [1mSTEP[0m: deleting the driver namespace: provisioning-5729-6675 [1mSTEP[0m: Waiting for namespaces [provisioning-5729-6675] to vanish Jun 21 21:10:14.241: INFO: error deleting namespace provisioning-5729-6675: timed out waiting for the condition [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:10:14.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "provisioning-5729" for this suite. [1mSTEP[0m: Destroying namespace "provisioning-5729-6675" for this suite. ... skipping 5 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":11,"skipped":68,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:10:14.538: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping ... skipping 23 lines ... Jun 21 21:10:14.552: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename var-expansion [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test substitution in container's command Jun 21 21:10:15.141: INFO: Waiting up to 5m0s for pod "var-expansion-d42ad505-c819-4e49-a845-d7af5efc55c4" in namespace "var-expansion-9062" to be "Succeeded or Failed" Jun 21 21:10:15.238: INFO: Pod "var-expansion-d42ad505-c819-4e49-a845-d7af5efc55c4": Phase="Pending", Reason="", readiness=false. Elapsed: 96.625335ms Jun 21 21:10:17.336: INFO: Pod "var-expansion-d42ad505-c819-4e49-a845-d7af5efc55c4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.194139656s [1mSTEP[0m: Saw pod success Jun 21 21:10:17.336: INFO: Pod "var-expansion-d42ad505-c819-4e49-a845-d7af5efc55c4" satisfied condition "Succeeded or Failed" Jun 21 21:10:17.433: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod var-expansion-d42ad505-c819-4e49-a845-d7af5efc55c4 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:10:17.699: INFO: Waiting for pod var-expansion-d42ad505-c819-4e49-a845-d7af5efc55c4 to disappear Jun 21 21:10:17.796: INFO: Pod var-expansion-d42ad505-c819-4e49-a845-d7af5efc55c4 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:10:17.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "var-expansion-9062" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":80,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath ... skipping 5 lines ... [It] should support file as subpath [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230 Jun 21 21:09:55.938: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 21 21:09:56.078: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-hdzc [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 21 21:09:56.195: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-hdzc" in namespace "provisioning-4903" to be "Succeeded or Failed" Jun 21 21:09:56.296: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Pending", Reason="", readiness=false. Elapsed: 100.772368ms Jun 21 21:09:58.398: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203611992s Jun 21 21:10:00.498: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Running", Reason="", readiness=true. Elapsed: 4.303105443s Jun 21 21:10:02.603: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Running", Reason="", readiness=true. Elapsed: 6.408056504s Jun 21 21:10:04.714: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Running", Reason="", readiness=true. Elapsed: 8.519745476s Jun 21 21:10:06.815: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Running", Reason="", readiness=true. Elapsed: 10.619755055s Jun 21 21:10:08.915: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Running", Reason="", readiness=true. Elapsed: 12.720629124s Jun 21 21:10:11.017: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Running", Reason="", readiness=true. Elapsed: 14.821975951s Jun 21 21:10:13.124: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Running", Reason="", readiness=true. Elapsed: 16.929069789s Jun 21 21:10:15.224: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Running", Reason="", readiness=true. Elapsed: 19.029282769s Jun 21 21:10:17.325: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Running", Reason="", readiness=true. 
Elapsed: 21.129873659s Jun 21 21:10:19.440: INFO: Pod "pod-subpath-test-inlinevolume-hdzc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.245689551s [1mSTEP[0m: Saw pod success Jun 21 21:10:19.440: INFO: Pod "pod-subpath-test-inlinevolume-hdzc" satisfied condition "Succeeded or Failed" Jun 21 21:10:19.541: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-hdzc container test-container-subpath-inlinevolume-hdzc: <nil> [1mSTEP[0m: delete the pod Jun 21 21:10:19.791: INFO: Waiting for pod pod-subpath-test-inlinevolume-hdzc to disappear Jun 21 21:10:19.906: INFO: Pod pod-subpath-test-inlinevolume-hdzc no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-hdzc Jun 21 21:10:19.906: INFO: Deleting pod "pod-subpath-test-inlinevolume-hdzc" in namespace "provisioning-4903" ... skipping 12 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":23,"skipped":173,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:10:20.318: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 158 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: hostPath] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver hostPath doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 316 lines ... 
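Editor's note: the "should support file as subpath" case above mounts a single file out of a volume through volumeMounts[].subPath. A minimal sketch of that shape follows; the real case used an inline hostPath volume, whereas this sketch substitutes an emptyDir, and the helper name, image, and paths are illustrative assumptions.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fileSubPathPod builds a pod in which an init container writes a file into a
// shared volume and the main container mounts only that file via subPath.
func fileSubPathPod(namespace string) *corev1.Pod {
	image := "registry.k8s.io/e2e-test-images/busybox:1.29" // illustrative image
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-file-example", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			InitContainers: []corev1.Container{{
				Name:         "init-writer",
				Image:        image,
				Command:      []string{"sh", "-c", "echo hello > /whole-volume/file.txt"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/whole-volume"}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   image,
				Command: []string{"cat", "/probe/file.txt"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/probe/file.txt",
					SubPath:   "file.txt", // expose a single file out of the volume
				}},
			}},
		},
	}
}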
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99[0m should provide basic identity [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:130[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":44,"skipped":339,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:10:29.778: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename kubectl [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 52 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl label [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1329[0m should update the label on a resource [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":45,"skipped":339,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:10:40.779: INFO: Only supported for providers [gce gke] (not aws) ... skipping 135 lines ... 
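Editor's note: the "Kubectl label should update the label on a resource" case above is, at the API level, just a strategic-merge patch on metadata.labels. A hedged client-go equivalent of that kubectl operation (pod as the resource kind, and the helper name, are assumptions; the label key is assumed JSON-safe):

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelPod applies the same change `kubectl label pod NAME key=value` would:
// a strategic-merge patch that adds (or overwrites) one metadata label.
func labelPod(ctx context.Context, cs kubernetes.Interface, namespace, name, key, value string) (*corev1.Pod, error) {
	patch := []byte(`{"metadata":{"labels":{"` + key + `":"` + value + `"}}}`)
	return cs.CoreV1().Pods(namespace).Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
}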
Jun 21 21:00:22.858: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-68xwk] to have phase Bound Jun 21 21:00:22.955: INFO: PersistentVolumeClaim pvc-68xwk found and phase=Bound (96.116127ms) [1mSTEP[0m: Deleting the previously created pod Jun 21 21:00:27.476: INFO: Deleting pod "pvc-volume-tester-9wn59" in namespace "csi-mock-volumes-6843" Jun 21 21:00:27.575: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9wn59" to be fully deleted [1mSTEP[0m: Checking CSI driver logs Jun 21 21:00:31.888: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/07bc1735-c1bf-4619-aab5-fb34bfda1cf0/volumes/kubernetes.io~csi/pvc-2e626429-b8af-4278-98cc-081c5d9c359b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} [1mSTEP[0m: Deleting pod pvc-volume-tester-9wn59 Jun 21 21:00:31.888: INFO: Deleting pod "pvc-volume-tester-9wn59" in namespace "csi-mock-volumes-6843" [1mSTEP[0m: Deleting claim pvc-68xwk Jun 21 21:00:32.188: INFO: Waiting up to 2m0s for PersistentVolume pvc-2e626429-b8af-4278-98cc-081c5d9c359b to get deleted Jun 21 21:00:32.284: INFO: PersistentVolume pvc-2e626429-b8af-4278-98cc-081c5d9c359b found and phase=Released (96.437196ms) Jun 21 21:00:34.382: INFO: PersistentVolume pvc-2e626429-b8af-4278-98cc-081c5d9c359b found and phase=Released (2.194552954s) Jun 21 21:00:36.483: INFO: PersistentVolume pvc-2e626429-b8af-4278-98cc-081c5d9c359b was removed [1mSTEP[0m: Deleting storageclass csi-mock-volumes-6843-sclhlfd [1mSTEP[0m: Cleaning up resources [1mSTEP[0m: deleting the test namespace: csi-mock-volumes-6843 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-6843] to vanish Jun 21 21:05:37.070: INFO: error deleting namespace csi-mock-volumes-6843: timed out waiting for the condition [1mSTEP[0m: uninstalling csi mock driver Jun 21 21:05:37.070: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6843-7735/csi-attacher Jun 21 21:05:37.180: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6843 Jun 21 21:05:37.287: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6843 Jun 21 21:05:37.392: INFO: deleting *v1.Role: csi-mock-volumes-6843-7735/external-attacher-cfg-csi-mock-volumes-6843 Jun 21 21:05:37.491: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6843-7735/csi-attacher-role-cfg ... skipping 21 lines ... Jun 21 21:05:40.037: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6843 Jun 21 21:05:40.135: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6843 Jun 21 21:05:40.234: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6843-7735/csi-mockplugin Jun 21 21:05:40.331: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6843-7735/csi-mockplugin-attacher [1mSTEP[0m: deleting the driver namespace: csi-mock-volumes-6843-7735 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-6843-7735] to vanish Jun 21 21:10:41.058: INFO: error deleting namespace csi-mock-volumes-6843-7735: timed out waiting for the condition [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:10:41.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "csi-mock-volumes-6843" for this suite. [1mSTEP[0m: Destroying namespace "csi-mock-volumes-6843-7735" for this suite. ... 
skipping 3 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI workload information using mock driver [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:469[0m should not be passed when CSIDriver does not exist [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:519[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":24,"skipped":183,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:10:41.390: INFO: Only supported for providers [vsphere] (not aws) ... skipping 142 lines ... Jun 21 21:00:35.856: INFO: PersistentVolume pvc-98fd9062-9107-4b2f-a927-13192cd05698 was removed Jun 21 21:00:35.856: INFO: deleting claim "provisioning-6364"/"pvc-rjxqd" Jun 21 21:00:36.035: INFO: deleting source PVC "provisioning-6364"/"pvc-mlhh6" Jun 21 21:00:36.189: INFO: deleting storage class provisioning-636498rpt [1mSTEP[0m: deleting the test namespace: provisioning-6364 [1mSTEP[0m: Waiting for namespaces [provisioning-6364] to vanish Jun 21 21:05:36.839: INFO: error deleting namespace provisioning-6364: timed out waiting for the condition [1mSTEP[0m: uninstalling csi csi-hostpath driver Jun 21 21:05:36.839: INFO: deleting *v1.ServiceAccount: provisioning-6364-4827/csi-attacher Jun 21 21:05:36.944: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-6364 Jun 21 21:05:37.073: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-6364 Jun 21 21:05:37.185: INFO: deleting *v1.Role: provisioning-6364-4827/external-attacher-cfg-provisioning-6364 Jun 21 21:05:37.293: INFO: deleting *v1.RoleBinding: provisioning-6364-4827/csi-attacher-role-cfg ... skipping 30 lines ... Jun 21 21:05:40.843: INFO: deleting *v1.RoleBinding: provisioning-6364-4827/csi-hostpathplugin-resizer-role Jun 21 21:05:40.957: INFO: deleting *v1.RoleBinding: provisioning-6364-4827/csi-hostpathplugin-snapshotter-role Jun 21 21:05:41.059: INFO: deleting *v1.StatefulSet: provisioning-6364-4827/csi-hostpathplugin Jun 21 21:05:41.162: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-6364 [1mSTEP[0m: deleting the driver namespace: provisioning-6364-4827 [1mSTEP[0m: Waiting for namespaces [provisioning-6364-4827] to vanish Jun 21 21:10:41.805: INFO: error deleting namespace provisioning-6364-4827: timed out waiting for the condition [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:10:41.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "provisioning-6364" for this suite. [1mSTEP[0m: Destroying namespace "provisioning-6364-4827" for this suite. ... skipping 5 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (default fs)] provisioning [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":30,"skipped":287,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:10:42.119: INFO: Only supported for providers [gce gke] (not aws) ... skipping 64 lines ... [32m• [SLOW TEST:36.923 seconds][0m [sig-apps] Job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should remove pods when job is deleted [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:185[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":33,"skipped":253,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:10:47.705: INFO: >>> kubeConfig: /root/.kube/config ... skipping 90 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and read from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":25,"skipped":190,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:10:51.134: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename kubectl [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 41 lines ... 
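Editor's note: the csi-hostpath "should provision storage with pvc data source" passes above clone an existing claim by pointing the new claim's spec.dataSource at it. A sketch of such a claim follows; names, size, and storage class are illustrative, and the field types match the client-go generation contemporaneous with this run (newer releases rename the Resources type to VolumeResourceRequirements).

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clonedPVC builds a claim whose dataSource points at an existing claim, so the
// CSI provisioner populates the new volume from the source volume's contents.
func clonedPVC(namespace, sourceClaim, storageClass string) *corev1.PersistentVolumeClaim {
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-clone", Namespace: namespace},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &storageClass,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
			},
			DataSource: &corev1.TypedLocalObjectReference{
				Kind: "PersistentVolumeClaim", // core group, so no APIGroup needed
				Name: sourceClaim,
			},
		},
	}
}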
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl apply [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:816[0m apply set/view last-applied [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:851[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":26,"skipped":190,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 14 lines ... [32m• [SLOW TEST:41.730 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m works for multiple CRDs of same group and version but different kinds [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":24,"skipped":198,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:02.102: INFO: Only supported for providers [openstack] (not aws) ... skipping 45 lines ... [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Jun 21 21:11:00.710: INFO: Waiting up to 5m0s for pod "busybox-user-0-cdc9221f-dd95-448d-9c74-d8c4c6a35f3e" in namespace "security-context-test-2896" to be "Succeeded or Failed" Jun 21 21:11:00.806: INFO: Pod "busybox-user-0-cdc9221f-dd95-448d-9c74-d8c4c6a35f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 96.414673ms Jun 21 21:11:02.924: INFO: Pod "busybox-user-0-cdc9221f-dd95-448d-9c74-d8c4c6a35f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213760391s Jun 21 21:11:05.027: INFO: Pod "busybox-user-0-cdc9221f-dd95-448d-9c74-d8c4c6a35f3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.317510149s Jun 21 21:11:05.027: INFO: Pod "busybox-user-0-cdc9221f-dd95-448d-9c74-d8c4c6a35f3e" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:11:05.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-2896" for this suite. ... 
skipping 2 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m When creating a container with runAsUser [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50[0m should run the container with uid 0 [LinuxOnly] [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":27,"skipped":191,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:05.227: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping ... skipping 77 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:11:08.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "disruption-7004" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":28,"skipped":198,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:08.638: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 89 lines ... Jun 21 21:10:56.832: INFO: PersistentVolumeClaim pvc-vlsw6 found but phase is Pending instead of Bound. Jun 21 21:10:58.933: INFO: PersistentVolumeClaim pvc-vlsw6 found and phase=Bound (12.926293267s) Jun 21 21:10:58.934: INFO: Waiting up to 3m0s for PersistentVolume local-hl4cb to have phase Bound Jun 21 21:10:59.034: INFO: PersistentVolume local-hl4cb found and phase=Bound (100.088829ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-tvbf [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:10:59.346: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tvbf" in namespace "provisioning-8474" to be "Succeeded or Failed" Jun 21 21:10:59.469: INFO: Pod "pod-subpath-test-preprovisionedpv-tvbf": Phase="Pending", Reason="", readiness=false. Elapsed: 123.744296ms Jun 21 21:11:01.569: INFO: Pod "pod-subpath-test-preprovisionedpv-tvbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223650155s Jun 21 21:11:03.686: INFO: Pod "pod-subpath-test-preprovisionedpv-tvbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34028381s Jun 21 21:11:05.786: INFO: Pod "pod-subpath-test-preprovisionedpv-tvbf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.440491463s [1mSTEP[0m: Saw pod success Jun 21 21:11:05.786: INFO: Pod "pod-subpath-test-preprovisionedpv-tvbf" satisfied condition "Succeeded or Failed" Jun 21 21:11:05.890: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-tvbf container test-container-subpath-preprovisionedpv-tvbf: <nil> [1mSTEP[0m: delete the pod Jun 21 21:11:06.128: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tvbf to disappear Jun 21 21:11:06.226: INFO: Pod pod-subpath-test-preprovisionedpv-tvbf no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-tvbf Jun 21 21:11:06.226: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tvbf" in namespace "provisioning-8474" [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-tvbf [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:11:06.479: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tvbf" in namespace "provisioning-8474" to be "Succeeded or Failed" Jun 21 21:11:06.583: INFO: Pod "pod-subpath-test-preprovisionedpv-tvbf": Phase="Pending", Reason="", readiness=false. Elapsed: 104.180054ms Jun 21 21:11:08.683: INFO: Pod "pod-subpath-test-preprovisionedpv-tvbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.203957896s [1mSTEP[0m: Saw pod success Jun 21 21:11:08.683: INFO: Pod "pod-subpath-test-preprovisionedpv-tvbf" satisfied condition "Succeeded or Failed" Jun 21 21:11:08.784: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-tvbf container test-container-subpath-preprovisionedpv-tvbf: <nil> [1mSTEP[0m: delete the pod Jun 21 21:11:09.004: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tvbf to disappear Jun 21 21:11:09.102: INFO: Pod pod-subpath-test-preprovisionedpv-tvbf no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-tvbf Jun 21 21:11:09.102: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tvbf" in namespace "provisioning-8474" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":31,"skipped":296,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:10.781: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 118 lines ... 
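Editor's note: the "[Driver: local]" pre-provisioned cases above first create a local PersistentVolume pinned to one node plus a matching claim, then wait for both to reach Bound before running the pod. A sketch of that PV (helper name, path, size, and class name are illustrative assumptions; the node name is the kind of value this log shows):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV builds a pre-provisioned local PersistentVolume pinned to one node;
// a claim with the same storageClassName then binds to it.
func localPV(nodeName, hostPath string) *corev1.PersistentVolume {
	fsMode := corev1.PersistentVolumeFilesystem
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-example"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:                      corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage", // must match the claim
			VolumeMode:                    &fsMode,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: hostPath},
			},
			// A local volume is only usable on the node that has the path, so
			// node affinity is mandatory for this volume type.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}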
Jun 21 21:01:07.781: INFO: Waiting up to 2m0s for PersistentVolume pvc-c62418b6-2acf-4cd9-9a61-71922562d28b to get deleted Jun 21 21:01:07.878: INFO: PersistentVolume pvc-c62418b6-2acf-4cd9-9a61-71922562d28b was removed [1mSTEP[0m: Deleting storageclass csi-mock-volumes-8179-scc4bhv [1mSTEP[0m: Cleaning up resources [1mSTEP[0m: deleting the test namespace: csi-mock-volumes-8179 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-8179] to vanish Jun 21 21:06:08.434: INFO: error deleting namespace csi-mock-volumes-8179: timed out waiting for the condition [1mSTEP[0m: uninstalling csi mock driver Jun 21 21:06:08.434: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8179-3336/csi-attacher Jun 21 21:06:08.533: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8179 Jun 21 21:06:08.632: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8179 Jun 21 21:06:08.745: INFO: deleting *v1.Role: csi-mock-volumes-8179-3336/external-attacher-cfg-csi-mock-volumes-8179 Jun 21 21:06:08.845: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8179-3336/csi-attacher-role-cfg ... skipping 21 lines ... Jun 21 21:06:11.161: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8179 Jun 21 21:06:11.267: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8179 Jun 21 21:06:11.366: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8179-3336/csi-mockplugin Jun 21 21:06:11.469: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8179-3336/csi-mockplugin-attacher [1mSTEP[0m: deleting the driver namespace: csi-mock-volumes-8179-3336 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-8179-3336] to vanish Jun 21 21:11:12.208: INFO: error deleting namespace csi-mock-volumes-8179-3336: timed out waiting for the condition [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:11:12.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "csi-mock-volumes-8179" for this suite. [1mSTEP[0m: Destroying namespace "csi-mock-volumes-8179-3336" for this suite. ... skipping 3 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI Volume expansion [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:641[0m should not expand volume if resizingOnDriver=off, resizingOnSC=on [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:670[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":29,"skipped":278,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:12.532: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 80 lines ... 
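Editor's note: the CSI mock expansion case above ("should not expand volume if resizingOnDriver=off, resizingOnSC=on") hinges on the StorageClass flag: expansion is only attempted when allowVolumeExpansion is true, and still cannot proceed when the driver does not advertise the capability. A sketch of such a class (helper name and binding mode are assumptions; the provisioner string would be the mock driver's name):

package e2esketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newExpandableClass builds a StorageClass that permits PVC expansion
// ("resizingOnSC=on"); whether a resize actually succeeds still depends on the
// CSI driver supporting volume expansion ("resizingOnDriver").
func newExpandableClass(name, provisioner string) *storagev1.StorageClass {
	allow := true
	binding := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:           metav1.ObjectMeta{Name: name},
		Provisioner:          provisioner,
		AllowVolumeExpansion: &allow,
		VolumeBindingMode:    &binding,
	}
}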
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99[0m should not deadlock when a pod's predecessor fails [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:254[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":32,"skipped":298,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:17.200: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 130 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should return command exit codes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499[0m running a failing command [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:519[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":25,"skipped":207,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:18.995: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 48 lines ... Jun 21 21:11:10.788: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support existing directory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205 Jun 21 21:11:11.301: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 21 21:11:11.572: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4089" in namespace "provisioning-4089" to be "Succeeded or Failed" Jun 21 21:11:11.733: INFO: Pod "hostpath-symlink-prep-provisioning-4089": Phase="Pending", Reason="", readiness=false. Elapsed: 160.174849ms Jun 21 21:11:13.841: INFO: Pod "hostpath-symlink-prep-provisioning-4089": Phase="Running", Reason="", readiness=true. Elapsed: 2.268429465s Jun 21 21:11:15.946: INFO: Pod "hostpath-symlink-prep-provisioning-4089": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.373393424s [1mSTEP[0m: Saw pod success Jun 21 21:11:15.946: INFO: Pod "hostpath-symlink-prep-provisioning-4089" satisfied condition "Succeeded or Failed" Jun 21 21:11:15.946: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4089" in namespace "provisioning-4089" Jun 21 21:11:16.051: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4089" to be fully deleted Jun 21 21:11:16.150: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-4hfx [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:11:16.256: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-4hfx" in namespace "provisioning-4089" to be "Succeeded or Failed" Jun 21 21:11:16.537: INFO: Pod "pod-subpath-test-inlinevolume-4hfx": Phase="Pending", Reason="", readiness=false. Elapsed: 281.118721ms Jun 21 21:11:18.637: INFO: Pod "pod-subpath-test-inlinevolume-4hfx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381433879s Jun 21 21:11:20.737: INFO: Pod "pod-subpath-test-inlinevolume-4hfx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.480659279s [1mSTEP[0m: Saw pod success Jun 21 21:11:20.737: INFO: Pod "pod-subpath-test-inlinevolume-4hfx" satisfied condition "Succeeded or Failed" Jun 21 21:11:20.835: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-4hfx container test-container-volume-inlinevolume-4hfx: <nil> [1mSTEP[0m: delete the pod Jun 21 21:11:21.086: INFO: Waiting for pod pod-subpath-test-inlinevolume-4hfx to disappear Jun 21 21:11:21.187: INFO: Pod pod-subpath-test-inlinevolume-4hfx no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-4hfx Jun 21 21:11:21.187: INFO: Deleting pod "pod-subpath-test-inlinevolume-4hfx" in namespace "provisioning-4089" [1mSTEP[0m: Deleting pod Jun 21 21:11:21.285: INFO: Deleting pod "pod-subpath-test-inlinevolume-4hfx" in namespace "provisioning-4089" Jun 21 21:11:21.503: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4089" in namespace "provisioning-4089" to be "Succeeded or Failed" Jun 21 21:11:21.604: INFO: Pod "hostpath-symlink-prep-provisioning-4089": Phase="Pending", Reason="", readiness=false. Elapsed: 100.431437ms Jun 21 21:11:23.742: INFO: Pod "hostpath-symlink-prep-provisioning-4089": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.239093878s [1mSTEP[0m: Saw pod success Jun 21 21:11:23.742: INFO: Pod "hostpath-symlink-prep-provisioning-4089" satisfied condition "Succeeded or Failed" Jun 21 21:11:23.742: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4089" in namespace "provisioning-4089" Jun 21 21:11:23.882: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4089" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:11:23.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "provisioning-4089" for this suite. ... skipping 6 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":32,"skipped":303,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:24.194: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 28 lines ... [It] should support readOnly directory specified in the volumeMount /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365 Jun 21 21:11:19.515: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jun 21 21:11:19.613: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-trtm [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:11:19.724: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-trtm" in namespace "provisioning-7669" to be "Succeeded or Failed" Jun 21 21:11:19.829: INFO: Pod "pod-subpath-test-inlinevolume-trtm": Phase="Pending", Reason="", readiness=false. Elapsed: 104.961028ms Jun 21 21:11:21.935: INFO: Pod "pod-subpath-test-inlinevolume-trtm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21100926s Jun 21 21:11:24.033: INFO: Pod "pod-subpath-test-inlinevolume-trtm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308574272s [1mSTEP[0m: Saw pod success Jun 21 21:11:24.033: INFO: Pod "pod-subpath-test-inlinevolume-trtm" satisfied condition "Succeeded or Failed" Jun 21 21:11:24.130: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-trtm container test-container-subpath-inlinevolume-trtm: <nil> [1mSTEP[0m: delete the pod Jun 21 21:11:24.350: INFO: Waiting for pod pod-subpath-test-inlinevolume-trtm to disappear Jun 21 21:11:24.448: INFO: Pod pod-subpath-test-inlinevolume-trtm no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-trtm Jun 21 21:11:24.448: INFO: Deleting pod "pod-subpath-test-inlinevolume-trtm" in namespace "provisioning-7669" ... skipping 12 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":26,"skipped":221,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes ... skipping 28 lines ... Jun 21 21:11:26.117: INFO: PersistentVolumeClaim pvc-6585j found but phase is Pending instead of Bound. Jun 21 21:11:28.214: INFO: PersistentVolumeClaim pvc-6585j found and phase=Bound (14.8135209s) Jun 21 21:11:28.215: INFO: Waiting up to 3m0s for PersistentVolume local-v588t to have phase Bound Jun 21 21:11:28.346: INFO: PersistentVolume local-v588t found and phase=Bound (131.860366ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-xw8l [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 21:11:28.760: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-xw8l" in namespace "volume-5459" to be "Succeeded or Failed" Jun 21 21:11:28.860: INFO: Pod "exec-volume-test-preprovisionedpv-xw8l": Phase="Pending", Reason="", readiness=false. Elapsed: 99.435612ms Jun 21 21:11:30.957: INFO: Pod "exec-volume-test-preprovisionedpv-xw8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196941172s Jun 21 21:11:33.072: INFO: Pod "exec-volume-test-preprovisionedpv-xw8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311986791s [1mSTEP[0m: Saw pod success Jun 21 21:11:33.072: INFO: Pod "exec-volume-test-preprovisionedpv-xw8l" satisfied condition "Succeeded or Failed" Jun 21 21:11:33.181: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-xw8l container exec-container-preprovisionedpv-xw8l: <nil> [1mSTEP[0m: delete the pod Jun 21 21:11:33.388: INFO: Waiting for pod exec-volume-test-preprovisionedpv-xw8l to disappear Jun 21 21:11:33.485: INFO: Pod exec-volume-test-preprovisionedpv-xw8l no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-xw8l Jun 21 21:11:33.485: INFO: Deleting pod "exec-volume-test-preprovisionedpv-xw8l" in namespace "volume-5459" ... skipping 28 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (ext4)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":29,"skipped":211,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:11:36.116: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating secret with name projected-secret-test-a7b55eb4-aca7-42b6-a5fc-4ccadb710800 [1mSTEP[0m: Creating a pod to test consume secrets Jun 21 21:11:36.803: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-96e323a9-3976-4ecb-95be-c1ed41e7b9d8" in namespace "projected-198" to be "Succeeded or Failed" Jun 21 21:11:36.900: INFO: Pod "pod-projected-secrets-96e323a9-3976-4ecb-95be-c1ed41e7b9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 97.26413ms Jun 21 21:11:38.997: INFO: Pod "pod-projected-secrets-96e323a9-3976-4ecb-95be-c1ed41e7b9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193774514s Jun 21 21:11:41.097: INFO: Pod "pod-projected-secrets-96e323a9-3976-4ecb-95be-c1ed41e7b9d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294355855s [1mSTEP[0m: Saw pod success Jun 21 21:11:41.097: INFO: Pod "pod-projected-secrets-96e323a9-3976-4ecb-95be-c1ed41e7b9d8" satisfied condition "Succeeded or Failed" Jun 21 21:11:41.193: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-projected-secrets-96e323a9-3976-4ecb-95be-c1ed41e7b9d8 container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 21 21:11:41.412: INFO: Waiting for pod pod-projected-secrets-96e323a9-3976-4ecb-95be-c1ed41e7b9d8 to disappear Jun 21 21:11:41.510: INFO: Pod pod-projected-secrets-96e323a9-3976-4ecb-95be-c1ed41e7b9d8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:5.638 seconds][0m [sig-storage] Projected secret [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":214,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 101 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume at the same time [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":33,"skipped":322,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath ... skipping 9 lines ... Jun 21 21:11:17.704: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-2468hhqz [1mSTEP[0m: creating a claim Jun 21 21:11:17.802: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-rfwm [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:11:18.096: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rfwm" in namespace "provisioning-246" to be "Succeeded or Failed" Jun 21 21:11:18.193: INFO: Pod "pod-subpath-test-dynamicpv-rfwm": Phase="Pending", Reason="", readiness=false. Elapsed: 96.749533ms Jun 21 21:11:20.318: INFO: Pod "pod-subpath-test-dynamicpv-rfwm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221341916s Jun 21 21:11:22.434: INFO: Pod "pod-subpath-test-dynamicpv-rfwm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337283075s Jun 21 21:11:24.537: INFO: Pod "pod-subpath-test-dynamicpv-rfwm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440724713s Jun 21 21:11:26.634: INFO: Pod "pod-subpath-test-dynamicpv-rfwm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537552247s Jun 21 21:11:28.733: INFO: Pod "pod-subpath-test-dynamicpv-rfwm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.63706176s Jun 21 21:11:30.840: INFO: Pod "pod-subpath-test-dynamicpv-rfwm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.743946287s Jun 21 21:11:32.960: INFO: Pod "pod-subpath-test-dynamicpv-rfwm": Phase="Pending", Reason="", readiness=false. Elapsed: 14.863480949s Jun 21 21:11:35.061: INFO: Pod "pod-subpath-test-dynamicpv-rfwm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.964270531s [1mSTEP[0m: Saw pod success Jun 21 21:11:35.061: INFO: Pod "pod-subpath-test-dynamicpv-rfwm" satisfied condition "Succeeded or Failed" Jun 21 21:11:35.158: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-rfwm container test-container-volume-dynamicpv-rfwm: <nil> [1mSTEP[0m: delete the pod Jun 21 21:11:35.382: INFO: Waiting for pod pod-subpath-test-dynamicpv-rfwm to disappear Jun 21 21:11:35.478: INFO: Pod pod-subpath-test-dynamicpv-rfwm no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-rfwm Jun 21 21:11:35.478: INFO: Deleting pod "pod-subpath-test-dynamicpv-rfwm" in namespace "provisioning-246" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":33,"skipped":303,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:46.761: INFO: Only supported for providers [azure] (not aws) ... skipping 30 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:11:48.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "tables-6721" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":34,"skipped":305,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:48.277: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 102 lines ... 
Jun 21 21:11:41.232: INFO: Tries: 10, in try: 8, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-3171, hostIp: 172.20.0.148, podIp: 100.96.8.205, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-21 21:11:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-21 21:11:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-21 21:11:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-21 21:11:10 +0000 UTC }]" } Jun 21 21:11:43.357: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.8.205:9080/dial?request=hostName&protocol=udp&host=100.68.227.147&port=90&tries=1'] Namespace:nettest-3171 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 21 21:11:43.357: INFO: >>> kubeConfig: /root/.kube/config Jun 21 21:11:43.358: INFO: ExecWithOptions: Clientset creation Jun 21 21:11:43.358: INFO: ExecWithOptions: execute(POST https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-3171/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F100.96.8.205%3A9080%2Fdial%3Frequest%3DhostName%26protocol%3Dudp%26host%3D100.68.227.147%26port%3D90%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jun 21 21:11:44.056: INFO: Tries: 10, in try: 9, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-3171, hostIp: 172.20.0.148, podIp: 100.96.8.205, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-21 21:11:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-21 21:11:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-21 21:11:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-21 21:11:10 +0000 UTC }]" } Jun 21 21:11:46.057: FAIL: Unexpected endpoints return: map[netserver-0:{} netserver-1:{} netserver-2:{} netserver-3:{}], expect 1 endpoints Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f6d57) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 ... skipping 324 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:426[0m [91mJun 21 21:11:46.057: Unexpected endpoints return: map[netserver-0:{} netserver-1:{} netserver-2:{} netserver-3:{}], expect 1 endpoints[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 [90m------------------------------[0m {"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]","total":-1,"completed":33,"skipped":254,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 49 lines ... 
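The failure above ("Unexpected endpoints return: map[netserver-0:{} netserver-1:{} netserver-2:{} netserver-3:{}], expect 1 endpoints") comes from the affinity check that repeatedly curls the agnhost /dial endpoint shown in the log and records which backends answer. A rough Go sketch of that aggregation, assuming the same {"responses":[...]} JSON shape seen above; runDial stands in for the exec'd curl and the function names are illustrative only:

package affinitysketch

import (
	"encoding/json"
	"fmt"
)

// dialResponse matches the {"responses":["netserver-1"]} payloads in the log.
type dialResponse struct {
	Responses []string `json:"responses"`
}

// checkClientIPAffinity issues the dial request `tries` times and reports
// whether a single backend ever answered. Seeing all four netservers, as in
// the failure above, means ClientIP session affinity did not hold for the
// UDP service.
func checkClientIPAffinity(runDial func() (string, error), tries int) (bool, error) {
	seen := map[string]struct{}{}
	for i := 0; i < tries; i++ {
		out, err := runDial()
		if err != nil {
			return false, fmt.Errorf("dial attempt %d: %w", i, err)
		}
		var r dialResponse
		if err := json.Unmarshal([]byte(out), &r); err != nil {
			return false, err
		}
		for _, name := range r.Responses {
			seen[name] = struct{}{}
		}
	}
	return len(seen) == 1, nil
}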
Jun 21 21:11:48.283: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0666 on node default medium Jun 21 21:11:48.975: INFO: Waiting up to 5m0s for pod "pod-eee09ae0-33dc-4503-a894-f8582ebd8a0f" in namespace "emptydir-4451" to be "Succeeded or Failed" Jun 21 21:11:49.072: INFO: Pod "pod-eee09ae0-33dc-4503-a894-f8582ebd8a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 97.029989ms Jun 21 21:11:51.179: INFO: Pod "pod-eee09ae0-33dc-4503-a894-f8582ebd8a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203792476s Jun 21 21:11:53.279: INFO: Pod "pod-eee09ae0-33dc-4503-a894-f8582ebd8a0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.30380406s [1mSTEP[0m: Saw pod success Jun 21 21:11:53.279: INFO: Pod "pod-eee09ae0-33dc-4503-a894-f8582ebd8a0f" satisfied condition "Succeeded or Failed" Jun 21 21:11:53.376: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-eee09ae0-33dc-4503-a894-f8582ebd8a0f container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:11:53.581: INFO: Waiting for pod pod-eee09ae0-33dc-4503-a894-f8582ebd8a0f to disappear Jun 21 21:11:53.688: INFO: Pod pod-eee09ae0-33dc-4503-a894-f8582ebd8a0f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.601 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":318,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:53.886: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 87 lines ... 
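The emptyDir case above ("should support (root,0666,default)") boils down to a single-container pod that mounts an emptyDir volume on the default medium, creates a file with mode 0666 and prints its permissions before exiting. A hypothetical pod spec in that spirit (busybox and the shell command are placeholders; the real test uses the agnhost test image):

package emptydirsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirTestPod sketches the shape of the pod the emptyDir test creates.
func emptyDirTestPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}

The framework then waits for the pod with the same "Succeeded or Failed" loop seen earlier and asserts on the container's output.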
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and write from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":31,"skipped":221,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:54.205: INFO: Only supported for providers [vsphere] (not aws) ... skipping 178 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:11:56.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "container-runtime-8472" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":223,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:56.909: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 190 lines ... [32m• [SLOW TEST:7.017 seconds][0m [sig-apps] Job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should create pods for an Indexed job with completion indexes and specified hostname [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:150[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname","total":-1,"completed":34,"skipped":260,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:11:58.766: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 49 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:11:59.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7900" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":35,"skipped":267,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]"]} SS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 19 lines ... • [SLOW TEST:5.742 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should list and delete a collection of ReplicaSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":36,"skipped":334,"failed":0} [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 21 21:12:00.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace ... skipping 8 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:12:01.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-3403" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":37,"skipped":334,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client ... skipping 28 lines ... 
[32m• [SLOW TEST:11.371 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should mutate custom resource with pruning [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":32,"skipped":239,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... [32m• [SLOW TEST:9.361 seconds][0m [sig-auth] ServiceAccounts [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23[0m should ensure a single API token exists [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":28,"skipped":255,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 26 lines ... [32m• [SLOW TEST:7.007 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should release NodePorts on delete [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1585[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":38,"skipped":336,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 32 lines ... [32m• [SLOW TEST:9.779 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should be able to deny custom resource creation, update and deletion [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":29,"skipped":260,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:12:16.105: INFO: Only supported for providers [gce gke] (not aws) ... skipping 91 lines ... 
Jun 21 21:02:26.726: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ckfg2] to have phase Bound Jun 21 21:02:26.825: INFO: PersistentVolumeClaim pvc-ckfg2 found and phase=Bound (99.569169ms) [1mSTEP[0m: Deleting the previously created pod Jun 21 21:02:31.357: INFO: Deleting pod "pvc-volume-tester-4k5kb" in namespace "csi-mock-volumes-4712" Jun 21 21:02:31.482: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4k5kb" to be fully deleted [1mSTEP[0m: Checking CSI driver logs Jun 21 21:02:37.816: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/72f0ec12-211f-47a8-8a0b-bc0245dcbc7a/volumes/kubernetes.io~csi/pvc-cd8e28bf-c6a4-4591-8a73-1b6941adbef1/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} [1mSTEP[0m: Deleting pod pvc-volume-tester-4k5kb Jun 21 21:02:37.816: INFO: Deleting pod "pvc-volume-tester-4k5kb" in namespace "csi-mock-volumes-4712" [1mSTEP[0m: Deleting claim pvc-ckfg2 Jun 21 21:02:38.119: INFO: Waiting up to 2m0s for PersistentVolume pvc-cd8e28bf-c6a4-4591-8a73-1b6941adbef1 to get deleted Jun 21 21:02:38.222: INFO: PersistentVolume pvc-cd8e28bf-c6a4-4591-8a73-1b6941adbef1 found and phase=Released (103.174227ms) Jun 21 21:02:40.333: INFO: PersistentVolume pvc-cd8e28bf-c6a4-4591-8a73-1b6941adbef1 was removed [1mSTEP[0m: Deleting storageclass csi-mock-volumes-4712-sch4q9f [1mSTEP[0m: Cleaning up resources [1mSTEP[0m: deleting the test namespace: csi-mock-volumes-4712 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-4712] to vanish Jun 21 21:07:40.978: INFO: error deleting namespace csi-mock-volumes-4712: timed out waiting for the condition [1mSTEP[0m: uninstalling csi mock driver Jun 21 21:07:40.978: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4712-579/csi-attacher Jun 21 21:07:41.084: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4712 Jun 21 21:07:41.188: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4712 Jun 21 21:07:41.293: INFO: deleting *v1.Role: csi-mock-volumes-4712-579/external-attacher-cfg-csi-mock-volumes-4712 Jun 21 21:07:41.391: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4712-579/csi-attacher-role-cfg ... skipping 21 lines ... Jun 21 21:07:43.657: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4712 Jun 21 21:07:43.755: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4712 Jun 21 21:07:43.854: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4712-579/csi-mockplugin Jun 21 21:07:43.956: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4712-579/csi-mockplugin-attacher [1mSTEP[0m: deleting the driver namespace: csi-mock-volumes-4712-579 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-4712-579] to vanish Jun 21 21:12:44.552: INFO: error deleting namespace csi-mock-volumes-4712-579: timed out waiting for the condition [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:12:44.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "csi-mock-volumes-4712" for this suite. [1mSTEP[0m: Destroying namespace "csi-mock-volumes-4712-579" for this suite. ... skipping 3 lines ... 
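Both five-minute waits above end with "error deleting namespace ...: timed out waiting for the condition", i.e. the csi-mock test namespaces were still in Terminating when the test gave up. A small client-go sketch (a hypothetical helper, not part of this suite) that dumps a namespace's finalizers and status conditions, which is usually enough to see what is blocking deletion:

package nssketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpNamespaceConditions prints why a namespace such as csi-mock-volumes-4712
// might be stuck in Terminating: resources that cannot be deleted or
// finalizers that never clear show up in the status conditions.
func dumpNamespaceConditions(ctx context.Context, cs kubernetes.Interface, name string) error {
	ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("namespace %s phase=%s finalizers=%v\n", ns.Name, ns.Status.Phase, ns.Spec.Finalizers)
	for _, c := range ns.Status.Conditions {
		fmt.Printf("  %s=%s reason=%s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
	return nil
}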
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSIServiceAccountToken [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1576[0m token should not be plumbed down when CSIDriver is not deployed [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1604[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":27,"skipped":183,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:12:44.860: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 57 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:12:48.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replicaset-7812" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":28,"skipped":198,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:12:48.650: INFO: Only supported for providers [vsphere] (not aws) ... skipping 147 lines ... Jun 21 21:12:06.405: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-7996frr94 [1mSTEP[0m: creating a claim Jun 21 21:12:06.526: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-4m69 [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:12:06.820: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-4m69" in namespace "provisioning-7996" to be "Succeeded or Failed" Jun 21 21:12:06.916: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. Elapsed: 96.411685ms Jun 21 21:12:09.013: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193101568s Jun 21 21:12:11.110: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290006888s Jun 21 21:12:13.217: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397201446s Jun 21 21:12:15.320: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.499807222s Jun 21 21:12:17.420: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. Elapsed: 10.599861282s ... skipping 2 lines ... Jun 21 21:12:23.728: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. Elapsed: 16.908378925s Jun 21 21:12:25.828: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. Elapsed: 19.008191422s Jun 21 21:12:27.933: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. Elapsed: 21.112982726s Jun 21 21:12:30.031: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Pending", Reason="", readiness=false. Elapsed: 23.211432667s Jun 21 21:12:32.131: INFO: Pod "pod-subpath-test-dynamicpv-4m69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.311093219s [1mSTEP[0m: Saw pod success Jun 21 21:12:32.131: INFO: Pod "pod-subpath-test-dynamicpv-4m69" satisfied condition "Succeeded or Failed" Jun 21 21:12:32.227: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-4m69 container test-container-volume-dynamicpv-4m69: <nil> [1mSTEP[0m: delete the pod Jun 21 21:12:32.453: INFO: Waiting for pod pod-subpath-test-dynamicpv-4m69 to disappear Jun 21 21:12:32.550: INFO: Pod pod-subpath-test-dynamicpv-4m69 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-4m69 Jun 21 21:12:32.550: INFO: Deleting pod "pod-subpath-test-dynamicpv-4m69" in namespace "provisioning-7996" ... skipping 153 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":39,"skipped":339,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:12:55.950: INFO: Only supported for providers [openstack] (not aws) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 98 lines ... 
Jun 21 21:02:50.576: INFO: Deleting PersistentVolumeClaim "csi-hostpathvpfv6" Jun 21 21:02:50.678: INFO: Waiting up to 5m0s for PersistentVolume pvc-a1445516-a2a4-4a8d-b82b-c2d2f00bcae2 to get deleted Jun 21 21:02:50.778: INFO: PersistentVolume pvc-a1445516-a2a4-4a8d-b82b-c2d2f00bcae2 was removed [1mSTEP[0m: Deleting sc [1mSTEP[0m: deleting the test namespace: volume-expand-2526 [1mSTEP[0m: Waiting for namespaces [volume-expand-2526] to vanish Jun 21 21:07:51.378: INFO: error deleting namespace volume-expand-2526: timed out waiting for the condition [1mSTEP[0m: uninstalling csi csi-hostpath driver Jun 21 21:07:51.378: INFO: deleting *v1.ServiceAccount: volume-expand-2526-983/csi-attacher Jun 21 21:07:51.498: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-2526 Jun 21 21:07:51.595: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-2526 Jun 21 21:07:51.692: INFO: deleting *v1.Role: volume-expand-2526-983/external-attacher-cfg-volume-expand-2526 Jun 21 21:07:51.790: INFO: deleting *v1.RoleBinding: volume-expand-2526-983/csi-attacher-role-cfg ... skipping 30 lines ... Jun 21 21:07:55.074: INFO: deleting *v1.RoleBinding: volume-expand-2526-983/csi-hostpathplugin-resizer-role Jun 21 21:07:55.172: INFO: deleting *v1.RoleBinding: volume-expand-2526-983/csi-hostpathplugin-snapshotter-role Jun 21 21:07:55.273: INFO: deleting *v1.StatefulSet: volume-expand-2526-983/csi-hostpathplugin Jun 21 21:07:55.373: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-2526 [1mSTEP[0m: deleting the driver namespace: volume-expand-2526-983 [1mSTEP[0m: Waiting for namespaces [volume-expand-2526-983] to vanish Jun 21 21:12:55.951: INFO: error deleting namespace volume-expand-2526-983: timed out waiting for the condition [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:12:55.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-expand-2526" for this suite. [1mSTEP[0m: Destroying namespace "volume-expand-2526-983" for this suite. ... skipping 5 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m Verify if offline PVC expansion works [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":6,"skipped":49,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:12:56.251: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 77 lines ... 
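The "Verify if offline PVC expansion works" case above drives a resize by raising the claim's requested storage while no pod is using it; the CSI resizer grows the backing volume and the filesystem resize completes on the next mount. A compact sketch of the triggering step, assuming an existing bound claim (the helper name is illustrative, not the test's own code):

package expandsketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// requestPVCExpansion bumps spec.resources.requests.storage on the claim,
// which is what starts an (offline) expansion. A bound claim always carries
// a storage request, so the map assignment below is safe.
func requestPVCExpansion(ctx context.Context, cs kubernetes.Interface, ns, name string, newSize resource.Quantity) error {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = newSize
	_, err = cs.CoreV1().PersistentVolumeClaims(ns).Update(ctx, pvc, metav1.UpdateOptions{})
	return err
}

For example, requestPVCExpansion(ctx, cs, "volume-expand-2526", "csi-hostpathvpfv6", resource.MustParse("2Gi")) would have grown the claim deleted above; the StorageClass must set allowVolumeExpansion: true for the update to be admitted.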
Jun 21 21:02:49.517: INFO: PersistentVolume pvc-39a19e20-da09-48ed-ab9d-79ce71d2622c found and phase=Released (6.434890131s) Jun 21 21:02:51.615: INFO: PersistentVolume pvc-39a19e20-da09-48ed-ab9d-79ce71d2622c was removed [1mSTEP[0m: Deleting storageclass csi-mock-volumes-8824-scq9gvm [1mSTEP[0m: Cleaning up resources [1mSTEP[0m: deleting the test namespace: csi-mock-volumes-8824 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-8824] to vanish Jun 21 21:07:52.166: INFO: error deleting namespace csi-mock-volumes-8824: timed out waiting for the condition [1mSTEP[0m: uninstalling csi mock driver Jun 21 21:07:52.166: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8824-2727/csi-attacher Jun 21 21:07:52.264: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8824 Jun 21 21:07:52.367: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8824 Jun 21 21:07:52.495: INFO: deleting *v1.Role: csi-mock-volumes-8824-2727/external-attacher-cfg-csi-mock-volumes-8824 Jun 21 21:07:52.594: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8824-2727/csi-attacher-role-cfg ... skipping 22 lines ... Jun 21 21:07:54.999: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8824 Jun 21 21:07:55.097: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8824-2727/csi-mockplugin Jun 21 21:07:55.195: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8824 Jun 21 21:07:55.337: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8824-2727/csi-mockplugin-attacher [1mSTEP[0m: deleting the driver namespace: csi-mock-volumes-8824-2727 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-8824-2727] to vanish Jun 21 21:12:55.928: INFO: error deleting namespace csi-mock-volumes-8824-2727: timed out waiting for the condition [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:12:56.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "csi-mock-volumes-8824" for this suite. [1mSTEP[0m: Destroying namespace "csi-mock-volumes-8824-2727" for this suite. ... skipping 3 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI attach test using mock driver [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:332[0m should require VolumeAttach for drivers with attachment [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:360[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":23,"skipped":186,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:12:56.343: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 115 lines ... 
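The "should require VolumeAttach for drivers with attachment" case above passes only if a VolumeAttachment object is created for the mock driver before the pod can mount the volume. A sketch of how those objects can be inspected with client-go (illustrative helper, not the test's own code):

package attachsketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listVolumeAttachments prints one line per VolumeAttachment; for a CSI driver
// whose CSIDriver object requires attachment, one should exist per volume/node
// pair while the volume is in use.
func listVolumeAttachments(ctx context.Context, cs kubernetes.Interface) error {
	vas, err := cs.StorageV1().VolumeAttachments().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, va := range vas.Items {
		fmt.Printf("%s: attacher=%s node=%s attached=%v\n",
			va.Name, va.Spec.Attacher, va.Spec.NodeName, va.Status.Attached)
	}
	return nil
}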
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Simple pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should support inline execution and attach [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:563[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":36,"skipped":269,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]"]} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 11 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:12:57.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-7559" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":24,"skipped":198,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:12:57.439: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 133 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 21 21:12:56.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db0dce0a-feb8-4c3c-a40a-596f8bb26b7a" in namespace "projected-7062" to be "Succeeded or Failed" Jun 21 21:12:56.980: INFO: Pod "downwardapi-volume-db0dce0a-feb8-4c3c-a40a-596f8bb26b7a": Phase="Pending", Reason="", readiness=false. Elapsed: 96.884128ms Jun 21 21:12:59.109: INFO: Pod "downwardapi-volume-db0dce0a-feb8-4c3c-a40a-596f8bb26b7a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.225330307s [1mSTEP[0m: Saw pod success Jun 21 21:12:59.109: INFO: Pod "downwardapi-volume-db0dce0a-feb8-4c3c-a40a-596f8bb26b7a" satisfied condition "Succeeded or Failed" Jun 21 21:12:59.208: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod downwardapi-volume-db0dce0a-feb8-4c3c-a40a-596f8bb26b7a container client-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:12:59.496: INFO: Waiting for pod downwardapi-volume-db0dce0a-feb8-4c3c-a40a-596f8bb26b7a to disappear Jun 21 21:12:59.602: INFO: Pod downwardapi-volume-db0dce0a-feb8-4c3c-a40a-596f8bb26b7a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:12:59.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-7062" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":50,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:12:59.822: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 162 lines ... [32m• [SLOW TEST:12.572 seconds][0m [sig-api-machinery] Watchers [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should observe an object deletion if it stops meeting the requirements of the selector [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":25,"skipped":211,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:10.035: INFO: Only supported for providers [azure] (not aws) ... skipping 61 lines ... 
Jun 21 21:02:46.765: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8233 Jun 21 21:02:46.862: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8233 Jun 21 21:02:46.959: INFO: creating *v1.StatefulSet: csi-mock-volumes-8233-1086/csi-mockplugin Jun 21 21:02:47.063: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8233 Jun 21 21:02:47.163: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8233" Jun 21 21:02:47.274: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8233 to register on node ip-172-20-0-54.eu-west-2.compute.internal I0621 21:02:50.473286 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0621 21:02:50.578110 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8233","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0621 21:02:50.677151 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0621 21:02:50.775475 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0621 21:02:50.987424 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8233","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0621 21:02:51.724578 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8233"},"Error":"","FullError":null} [1mSTEP[0m: Creating pod Jun 21 21:02:52.808: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0621 21:02:53.105211 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-514afbca-3319-4b52-afdd-dab74a18b5d9","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0621 21:02:53.213908 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-514afbca-3319-4b52-afdd-dab74a18b5d9","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-514afbca-3319-4b52-afdd-dab74a18b5d9"}}},"Error":"","FullError":null} I0621 21:02:54.322483 7174 csi.go:444] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0621 21:02:54.425397 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0621 21:02:54.524725 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 21 21:02:54.621: INFO: >>> kubeConfig: /root/.kube/config Jun 21 21:02:54.622: INFO: ExecWithOptions: Clientset creation Jun 21 21:02:54.622: INFO: ExecWithOptions: execute(POST https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8233-1086/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fpv%2Fpvc-514afbca-3319-4b52-afdd-dab74a18b5d9%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fpv%2Fpvc-514afbca-3319-4b52-afdd-dab74a18b5d9%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING)) I0621 21:02:55.298329 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-514afbca-3319-4b52-afdd-dab74a18b5d9/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-514afbca-3319-4b52-afdd-dab74a18b5d9","storage.kubernetes.io/csiProvisionerIdentity":"1655845370829-8081-csi-mock-csi-mock-volumes-8233"}},"Response":{},"Error":"","FullError":null} I0621 21:02:55.499957 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0621 21:02:55.598211 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0621 21:02:55.699053 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 21 21:02:55.796: INFO: >>> kubeConfig: /root/.kube/config Jun 21 21:02:55.797: INFO: ExecWithOptions: Clientset creation Jun 21 21:02:55.797: INFO: ExecWithOptions: execute(POST 
https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8233-1086/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F54a81fec-67e2-4fc3-9b1a-0f9b00245672%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-514afbca-3319-4b52-afdd-dab74a18b5d9%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F54a81fec-67e2-4fc3-9b1a-0f9b00245672%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-514afbca-3319-4b52-afdd-dab74a18b5d9%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING)) Jun 21 21:02:56.447: INFO: >>> kubeConfig: /root/.kube/config Jun 21 21:02:56.449: INFO: ExecWithOptions: Clientset creation Jun 21 21:02:56.449: INFO: ExecWithOptions: execute(POST https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8233-1086/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F54a81fec-67e2-4fc3-9b1a-0f9b00245672%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-514afbca-3319-4b52-afdd-dab74a18b5d9%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F54a81fec-67e2-4fc3-9b1a-0f9b00245672%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-514afbca-3319-4b52-afdd-dab74a18b5d9%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING)) Jun 21 21:02:57.103: INFO: >>> kubeConfig: /root/.kube/config Jun 21 21:02:57.104: INFO: ExecWithOptions: Clientset creation Jun 21 21:02:57.104: INFO: ExecWithOptions: execute(POST https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8233-1086/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F54a81fec-67e2-4fc3-9b1a-0f9b00245672%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-514afbca-3319-4b52-afdd-dab74a18b5d9%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING)) I0621 21:02:57.761039 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-514afbca-3319-4b52-afdd-dab74a18b5d9/globalmount","target_path":"/var/lib/kubelet/pods/54a81fec-67e2-4fc3-9b1a-0f9b00245672/volumes/kubernetes.io~csi/pvc-514afbca-3319-4b52-afdd-dab74a18b5d9/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-514afbca-3319-4b52-afdd-dab74a18b5d9","storage.kubernetes.io/csiProvisionerIdentity":"1655845370829-8081-csi-mock-csi-mock-volumes-8233"}},"Response":{},"Error":"","FullError":null} I0621 21:03:01.077114 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0621 21:03:01.206079 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/54a81fec-67e2-4fc3-9b1a-0f9b00245672/volumes/kubernetes.io~csi/pvc-514afbca-3319-4b52-afdd-dab74a18b5d9/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} Jun 21 21:03:01.258: INFO: Deleting pod "pvc-volume-tester-sphzl" in namespace "csi-mock-volumes-8233" Jun 21 21:03:01.360: INFO: Wait up to 5m0s for pod "pvc-volume-tester-sphzl" to be fully 
deleted Jun 21 21:03:01.800: INFO: >>> kubeConfig: /root/.kube/config Jun 21 21:03:01.801: INFO: ExecWithOptions: Clientset creation Jun 21 21:03:01.801: INFO: ExecWithOptions: execute(POST https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8233-1086/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F54a81fec-67e2-4fc3-9b1a-0f9b00245672%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-514afbca-3319-4b52-afdd-dab74a18b5d9%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true %!s(MISSING)) I0621 21:03:02.474081 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/54a81fec-67e2-4fc3-9b1a-0f9b00245672/volumes/kubernetes.io~csi/pvc-514afbca-3319-4b52-afdd-dab74a18b5d9/mount"},"Response":{},"Error":"","FullError":null} I0621 21:03:02.613438 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0621 21:03:02.712579 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-514afbca-3319-4b52-afdd-dab74a18b5d9/globalmount"},"Response":{},"Error":"","FullError":null} I0621 21:03:03.673272 7174 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} [1mSTEP[0m: Checking PVC events Jun 21 21:03:04.656: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jjhms", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8233", SelfLink:"", UID:"514afbca-3319-4b52-afdd-dab74a18b5d9", ResourceVersion:"49746", Generation:0, CreationTimestamp:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002bfe300), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003444ac0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003444ad0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}} Jun 21 21:03:04.657: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pvc-jjhms", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8233", SelfLink:"", UID:"514afbca-3319-4b52-afdd-dab74a18b5d9", ResourceVersion:"49749", Generation:0, CreationTimestamp:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-0-54.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00267c378), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00267c3a8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0083ca350), VolumeMode:(*v1.PersistentVolumeMode)(0xc0083ca360), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}} Jun 21 21:03:04.657: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jjhms", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8233", SelfLink:"", UID:"514afbca-3319-4b52-afdd-dab74a18b5d9", ResourceVersion:"49750", Generation:0, CreationTimestamp:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8233", "volume.kubernetes.io/selected-node":"ip-172-20-0-54.eu-west-2.compute.internal", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8233"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00635f2f0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00635f320), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00635f350), Subresource:""}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc006cf97c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc006cf97d0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}} Jun 21 21:03:04.657: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jjhms", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8233", SelfLink:"", UID:"514afbca-3319-4b52-afdd-dab74a18b5d9", ResourceVersion:"49754", Generation:0, CreationTimestamp:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8233", "volume.kubernetes.io/selected-node":"ip-172-20-0-54.eu-west-2.compute.internal", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8233"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021f4198), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021f41c8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021f41f8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-514afbca-3319-4b52-afdd-dab74a18b5d9", StorageClassName:(*string)(0xc009304cb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc009304cc0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}} Jun 21 21:03:04.657: INFO: PVC event MODIFIED: 
&v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jjhms", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8233", SelfLink:"", UID:"514afbca-3319-4b52-afdd-dab74a18b5d9", ResourceVersion:"49755", Generation:0, CreationTimestamp:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8233", "volume.kubernetes.io/selected-node":"ip-172-20-0-54.eu-west-2.compute.internal", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8233"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 52, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021f4240), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021f4270), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021f42a0), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 21, 21, 2, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021f42d0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-514afbca-3319-4b52-afdd-dab74a18b5d9", StorageClassName:(*string)(0xc009304d00), VolumeMode:(*v1.PersistentVolumeMode)(0xc009304d10), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}} ... skipping 3 lines ... 
Jun 21 21:03:04.658: INFO: Deleting pod "pvc-volume-tester-sphzl" in namespace "csi-mock-volumes-8233" [1mSTEP[0m: Deleting claim pvc-jjhms [1mSTEP[0m: Deleting storageclass csi-mock-volumes-8233-sc8jzs7 [1mSTEP[0m: Cleaning up resources [1mSTEP[0m: deleting the test namespace: csi-mock-volumes-8233 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-8233] to vanish Jun 21 21:08:12.143: INFO: error deleting namespace csi-mock-volumes-8233: timed out waiting for the condition [1mSTEP[0m: uninstalling csi mock driver Jun 21 21:08:12.143: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8233-1086/csi-attacher Jun 21 21:08:12.242: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8233 Jun 21 21:08:12.341: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8233 Jun 21 21:08:12.450: INFO: deleting *v1.Role: csi-mock-volumes-8233-1086/external-attacher-cfg-csi-mock-volumes-8233 Jun 21 21:08:12.550: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8233-1086/csi-attacher-role-cfg ... skipping 21 lines ... Jun 21 21:08:15.016: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8233 Jun 21 21:08:15.115: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8233 Jun 21 21:08:15.216: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8233-1086/csi-mockplugin Jun 21 21:08:15.317: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8233 [1mSTEP[0m: deleting the driver namespace: csi-mock-volumes-8233-1086 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-8233-1086] to vanish Jun 21 21:13:16.005: INFO: error deleting namespace csi-mock-volumes-8233-1086: timed out waiting for the condition [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:13:16.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "csi-mock-volumes-8233" for this suite. [1mSTEP[0m: Destroying namespace "csi-mock-volumes-8233-1086" for this suite. ... skipping 3 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m storage capacity [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1102[0m exhausted, late binding, no topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1160[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":8,"skipped":35,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:16.628: INFO: Only supported for providers [gce gke] (not aws) ... skipping 39 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m When pod refers to non-existent ephemeral storage [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53[0m should allow deletion of pod with invalid volume : projected [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":30,"skipped":279,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:19.669: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 136 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod with the kernel.shm_rmid_forced sysctl [1mSTEP[0m: Watching for error events or started pod [1mSTEP[0m: Waiting for pod completion [1mSTEP[0m: Checking that the pod succeeded [1mSTEP[0m: Getting logs from the pod [1mSTEP[0m: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.349 seconds][0m [sig-node] Sysctls [LinuxOnly] [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should support sysctls [MinimumKubeletVersion:1.21] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":9,"skipped":39,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 29 lines ... 
[32m• [SLOW TEST:23.012 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m works for CRD preserving unknown fields at the schema root [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:22.884: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 81 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:13:22.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "gc-1111" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":31,"skipped":296,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral ... skipping 62 lines ... [1mSTEP[0m: Destroying namespace "services-1764" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":32,"skipped":302,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:24.985: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 35 lines ... 
[36mDriver local doesn't support GenericEphemeralVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":30,"skipped":268,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:13:23.044: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 14 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:13:27.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "emptydir-922" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":31,"skipped":268,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:27.536: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 239 lines ... 
[32m• [SLOW TEST:9.564 seconds][0m [sig-api-machinery] ServerSideApply [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should work for CRDs [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":10,"skipped":41,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:08:30.042: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename volume-provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144 [It] should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:741 [1mSTEP[0m: creating a StorageClass [1mSTEP[0m: Creating a StorageClass [1mSTEP[0m: creating a claim object with a suffix for gluster dynamic provisioner Jun 21 21:08:30.721: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 21 21:13:31.346: INFO: The test missed event about failed provisioning, but checked that no volume was provisioned for 5m0s Jun 21 21:13:31.346: INFO: deleting claim "volume-provisioning-5766"/"pvc-7n29k" Jun 21 21:13:31.491: INFO: deleting storage class volume-provisioning-5766-invalid-aws28lgn [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:13:31.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-provisioning-5766" for this suite. [32m• [SLOW TEST:301.815 seconds][0m [sig-storage] Dynamic Provisioning [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m Invalid AWS KMS key [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:740[0m should report an error and create no PV [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:741[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:13:24.991: INFO: >>> kubeConfig: /root/.kube/config ... skipping 24 lines ... 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 21 21:13:29.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bc55adb-4c61-40d8-a6be-c277de6cbe92" in namespace "projected-7070" to be "Succeeded or Failed" Jun 21 21:13:29.396: INFO: Pod "downwardapi-volume-1bc55adb-4c61-40d8-a6be-c277de6cbe92": Phase="Pending", Reason="", readiness=false. Elapsed: 97.489281ms Jun 21 21:13:31.505: INFO: Pod "downwardapi-volume-1bc55adb-4c61-40d8-a6be-c277de6cbe92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206453308s Jun 21 21:13:33.606: INFO: Pod "downwardapi-volume-1bc55adb-4c61-40d8-a6be-c277de6cbe92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.307388158s [1mSTEP[0m: Saw pod success Jun 21 21:13:33.606: INFO: Pod "downwardapi-volume-1bc55adb-4c61-40d8-a6be-c277de6cbe92" satisfied condition "Succeeded or Failed" Jun 21 21:13:33.719: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod downwardapi-volume-1bc55adb-4c61-40d8-a6be-c277de6cbe92 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:13:33.956: INFO: Waiting for pod downwardapi-volume-1bc55adb-4c61-40d8-a6be-c277de6cbe92 to disappear Jun 21 21:13:34.054: INFO: Pod downwardapi-volume-1bc55adb-4c61-40d8-a6be-c277de6cbe92 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.899 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":307,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:34.323: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 73 lines ... 
[32m• [SLOW TEST:24.306 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should serve multiport endpoints from pods [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":9,"skipped":86,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:47.203: INFO: Only supported for providers [openstack] (not aws) ... skipping 83 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":29,"skipped":227,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:50.288: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 35 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173[0m [36mDriver local doesn't support GenericEphemeralVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":33,"skipped":313,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:13:33.636: INFO: >>> kubeConfig: /root/.kube/config ... skipping 16 lines ... Jun 21 21:13:42.009: INFO: PersistentVolumeClaim pvc-cwb28 found but phase is Pending instead of Bound. 
Jun 21 21:13:44.139: INFO: PersistentVolumeClaim pvc-cwb28 found and phase=Bound (6.501969861s) Jun 21 21:13:44.139: INFO: Waiting up to 3m0s for PersistentVolume local-jksp8 to have phase Bound Jun 21 21:13:44.244: INFO: PersistentVolume local-jksp8 found and phase=Bound (104.792763ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-7fsq [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 21:13:44.630: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-7fsq" in namespace "volume-2640" to be "Succeeded or Failed" Jun 21 21:13:44.740: INFO: Pod "exec-volume-test-preprovisionedpv-7fsq": Phase="Pending", Reason="", readiness=false. Elapsed: 109.723257ms Jun 21 21:13:46.874: INFO: Pod "exec-volume-test-preprovisionedpv-7fsq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243912969s Jun 21 21:13:48.973: INFO: Pod "exec-volume-test-preprovisionedpv-7fsq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.34283945s [1mSTEP[0m: Saw pod success Jun 21 21:13:48.973: INFO: Pod "exec-volume-test-preprovisionedpv-7fsq" satisfied condition "Succeeded or Failed" Jun 21 21:13:49.070: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-7fsq container exec-container-preprovisionedpv-7fsq: <nil> [1mSTEP[0m: delete the pod Jun 21 21:13:49.339: INFO: Waiting for pod exec-volume-test-preprovisionedpv-7fsq to disappear Jun 21 21:13:49.457: INFO: Pod exec-volume-test-preprovisionedpv-7fsq no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-7fsq Jun 21 21:13:49.457: INFO: Deleting pod "exec-volume-test-preprovisionedpv-7fsq" in namespace "volume-2640" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":34,"skipped":313,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:50.820: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 68 lines ... 
Jun 21 21:13:17.067: INFO: Unable to read jessie_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:17.165: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:17.265: INFO: Unable to read jessie_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:17.363: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:17.462: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:17.560: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:18.005: INFO: Lookups using dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5320 wheezy_tcp@dns-test-service.dns-5320 wheezy_udp@dns-test-service.dns-5320.svc wheezy_tcp@dns-test-service.dns-5320.svc wheezy_udp@_http._tcp.dns-test-service.dns-5320.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5320.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5320 jessie_tcp@dns-test-service.dns-5320 jessie_udp@dns-test-service.dns-5320.svc jessie_tcp@dns-test-service.dns-5320.svc jessie_udp@_http._tcp.dns-test-service.dns-5320.svc jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc] Jun 21 21:13:23.132: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:23.264: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:23.368: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:23.473: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:23.582: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) ... skipping 5 lines ... 
Jun 21 21:13:24.975: INFO: Unable to read jessie_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:25.073: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:25.183: INFO: Unable to read jessie_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:25.286: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:25.395: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:25.493: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:26.090: INFO: Lookups using dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5320 wheezy_tcp@dns-test-service.dns-5320 wheezy_udp@dns-test-service.dns-5320.svc wheezy_tcp@dns-test-service.dns-5320.svc wheezy_udp@_http._tcp.dns-test-service.dns-5320.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5320.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5320 jessie_tcp@dns-test-service.dns-5320 jessie_udp@dns-test-service.dns-5320.svc jessie_tcp@dns-test-service.dns-5320.svc jessie_udp@_http._tcp.dns-test-service.dns-5320.svc jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc] Jun 21 21:13:28.154: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:28.261: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:28.392: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:28.634: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:28.773: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) ... skipping 5 lines ... 
Jun 21 21:13:30.169: INFO: Unable to read jessie_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:30.500: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:30.746: INFO: Unable to read jessie_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:30.864: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:30.990: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:31.105: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:31.599: INFO: Lookups using dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5320 wheezy_tcp@dns-test-service.dns-5320 wheezy_udp@dns-test-service.dns-5320.svc wheezy_tcp@dns-test-service.dns-5320.svc wheezy_udp@_http._tcp.dns-test-service.dns-5320.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5320.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5320 jessie_tcp@dns-test-service.dns-5320 jessie_udp@dns-test-service.dns-5320.svc jessie_tcp@dns-test-service.dns-5320.svc jessie_udp@_http._tcp.dns-test-service.dns-5320.svc jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc] Jun 21 21:13:33.129: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:33.233: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:33.385: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:33.493: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:33.597: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) ... skipping 5 lines ... 
Jun 21 21:13:34.839: INFO: Unable to read jessie_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:34.966: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:35.082: INFO: Unable to read jessie_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:35.235: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:35.361: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:35.601: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:36.232: INFO: Lookups using dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5320 wheezy_tcp@dns-test-service.dns-5320 wheezy_udp@dns-test-service.dns-5320.svc wheezy_tcp@dns-test-service.dns-5320.svc wheezy_udp@_http._tcp.dns-test-service.dns-5320.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5320.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5320 jessie_tcp@dns-test-service.dns-5320 jessie_udp@dns-test-service.dns-5320.svc jessie_tcp@dns-test-service.dns-5320.svc jessie_udp@_http._tcp.dns-test-service.dns-5320.svc jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc] Jun 21 21:13:38.229: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:38.440: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:38.580: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:38.686: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:38.793: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) ... skipping 5 lines ... 
Jun 21 21:13:39.917: INFO: Unable to read jessie_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:40.018: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:40.116: INFO: Unable to read jessie_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:40.214: INFO: Unable to read jessie_tcp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:40.316: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:40.413: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:40.867: INFO: Lookups using dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5320 wheezy_tcp@dns-test-service.dns-5320 wheezy_udp@dns-test-service.dns-5320.svc wheezy_tcp@dns-test-service.dns-5320.svc wheezy_udp@_http._tcp.dns-test-service.dns-5320.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5320.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5320 jessie_tcp@dns-test-service.dns-5320 jessie_udp@dns-test-service.dns-5320.svc jessie_tcp@dns-test-service.dns-5320.svc jessie_udp@_http._tcp.dns-test-service.dns-5320.svc jessie_tcp@_http._tcp.dns-test-service.dns-5320.svc] Jun 21 21:13:43.103: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:43.214: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:43.315: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:43.427: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5320 from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:43.525: INFO: Unable to read wheezy_udp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:43.623: INFO: Unable to 
read wheezy_tcp@dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:43.725: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:43.825: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5320.svc from pod dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e: the server could not find the requested resource (get pods dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e) Jun 21 21:13:45.656: INFO: Lookups using dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5320 wheezy_tcp@dns-test-service.dns-5320 wheezy_udp@dns-test-service.dns-5320.svc wheezy_tcp@dns-test-service.dns-5320.svc wheezy_udp@_http._tcp.dns-test-service.dns-5320.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5320.svc] Jun 21 21:13:50.534: INFO: DNS probes using dns-5320/dns-test-a2635e0c-4da5-4ed2-baeb-7d2fb6b7f23e succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: deleting the test service [1mSTEP[0m: deleting the test headless service ... skipping 6 lines ... [32m• [SLOW TEST:41.352 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":218,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:51.404: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 33 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:13:52.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "discovery-6999" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":30,"skipped":244,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:53.159: INFO: Only supported for providers [gce gke] (not aws) ... skipping 91 lines ... Jun 21 21:13:42.260: INFO: PersistentVolumeClaim pvc-cdh7x found but phase is Pending instead of Bound. 
Jun 21 21:13:44.417: INFO: PersistentVolumeClaim pvc-cdh7x found and phase=Bound (2.259995348s) Jun 21 21:13:44.417: INFO: Waiting up to 3m0s for PersistentVolume local-l4lqp to have phase Bound Jun 21 21:13:44.517: INFO: PersistentVolume local-l4lqp found and phase=Bound (99.916361ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-wvjd [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:13:44.856: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wvjd" in namespace "provisioning-4880" to be "Succeeded or Failed" Jun 21 21:13:44.960: INFO: Pod "pod-subpath-test-preprovisionedpv-wvjd": Phase="Pending", Reason="", readiness=false. Elapsed: 104.20091ms Jun 21 21:13:47.061: INFO: Pod "pod-subpath-test-preprovisionedpv-wvjd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204518929s Jun 21 21:13:49.171: INFO: Pod "pod-subpath-test-preprovisionedpv-wvjd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.314656669s [1mSTEP[0m: Saw pod success Jun 21 21:13:49.171: INFO: Pod "pod-subpath-test-preprovisionedpv-wvjd" satisfied condition "Succeeded or Failed" Jun 21 21:13:49.269: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-wvjd container test-container-volume-preprovisionedpv-wvjd: <nil> [1mSTEP[0m: delete the pod Jun 21 21:13:49.509: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wvjd to disappear Jun 21 21:13:49.605: INFO: Pod pod-subpath-test-preprovisionedpv-wvjd no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-wvjd Jun 21 21:13:49.605: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wvjd" in namespace "provisioning-4880" ... skipping 34 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":33,"skipped":313,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral ... skipping 55 lines ... 
[1mSTEP[0m: checking the requested inline volume exists in the pod running on node {Name:ip-172-20-0-148.eu-west-2.compute.internal Selector:map[] Affinity:nil} Jun 21 21:03:14.205: INFO: Pod inline-volume-tester-9kq7m has the following logs: Jun 21 21:03:14.306: INFO: Deleting pod "inline-volume-tester-9kq7m" in namespace "ephemeral-383" Jun 21 21:03:14.416: INFO: Wait up to 5m0s for pod "inline-volume-tester-9kq7m" to be fully deleted [1mSTEP[0m: deleting the test namespace: ephemeral-383 [1mSTEP[0m: Waiting for namespaces [ephemeral-383] to vanish Jun 21 21:08:49.178: INFO: error deleting namespace ephemeral-383: timed out waiting for the condition [1mSTEP[0m: uninstalling csi csi-hostpath driver Jun 21 21:08:49.178: INFO: deleting *v1.ServiceAccount: ephemeral-383-9626/csi-attacher Jun 21 21:08:49.276: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-383 Jun 21 21:08:49.374: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-383 Jun 21 21:08:49.503: INFO: deleting *v1.Role: ephemeral-383-9626/external-attacher-cfg-ephemeral-383 Jun 21 21:08:49.753: INFO: deleting *v1.RoleBinding: ephemeral-383-9626/csi-attacher-role-cfg ... skipping 30 lines ... Jun 21 21:08:53.177: INFO: deleting *v1.RoleBinding: ephemeral-383-9626/csi-hostpathplugin-resizer-role Jun 21 21:08:53.276: INFO: deleting *v1.RoleBinding: ephemeral-383-9626/csi-hostpathplugin-snapshotter-role Jun 21 21:08:53.374: INFO: deleting *v1.StatefulSet: ephemeral-383-9626/csi-hostpathplugin Jun 21 21:08:53.473: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-383 [1mSTEP[0m: deleting the driver namespace: ephemeral-383-9626 [1mSTEP[0m: Waiting for namespaces [ephemeral-383-9626] to vanish Jun 21 21:13:54.336: INFO: error deleting namespace ephemeral-383-9626: timed out waiting for the condition [AfterEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:13:54.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "ephemeral-383" for this suite. [1mSTEP[0m: Destroying namespace "ephemeral-383-9626" for this suite. ... skipping 5 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support multiple inline ephemeral volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:252[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":14,"skipped":95,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 23 lines ... 
[32m• [SLOW TEST:8.234 seconds][0m [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m will start an ephemeral container in an existing pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/ephemeral_containers.go:42[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] will start an ephemeral container in an existing pod","total":-1,"completed":10,"skipped":101,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:13:55.460: INFO: Only supported for providers [gce gke] (not aws) ... skipping 23 lines ... Jun 21 21:13:50.826: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename var-expansion [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test env composition Jun 21 21:13:51.652: INFO: Waiting up to 5m0s for pod "var-expansion-3b6c7bd2-f3ff-4451-93a4-9d11eb226891" in namespace "var-expansion-1340" to be "Succeeded or Failed" Jun 21 21:13:51.761: INFO: Pod "var-expansion-3b6c7bd2-f3ff-4451-93a4-9d11eb226891": Phase="Pending", Reason="", readiness=false. Elapsed: 108.87432ms Jun 21 21:13:53.998: INFO: Pod "var-expansion-3b6c7bd2-f3ff-4451-93a4-9d11eb226891": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3457839s Jun 21 21:13:56.107: INFO: Pod "var-expansion-3b6c7bd2-f3ff-4451-93a4-9d11eb226891": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.454560638s [1mSTEP[0m: Saw pod success Jun 21 21:13:56.107: INFO: Pod "var-expansion-3b6c7bd2-f3ff-4451-93a4-9d11eb226891" satisfied condition "Succeeded or Failed" Jun 21 21:13:56.216: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod var-expansion-3b6c7bd2-f3ff-4451-93a4-9d11eb226891 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:13:56.506: INFO: Waiting for pod var-expansion-3b6c7bd2-f3ff-4451-93a4-9d11eb226891 to disappear Jun 21 21:13:56.606: INFO: Pod var-expansion-3b6c7bd2-f3ff-4451-93a4-9d11eb226891 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:5.979 seconds][0m [sig-node] Variable Expansion [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should allow composing env vars into new env vars [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":320,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... [32m• [SLOW TEST:5.615 seconds][0m [sig-node] Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should support retrieving logs from the container over websockets [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":251,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 10 lines ... Jun 21 21:13:57.538: INFO: The status of Pod pod-update-activedeadlineseconds-4f7e83a9-11f5-4003-af6c-b759e247a691 is Pending, waiting for it to be Running (with Ready = true) Jun 21 21:13:59.558: INFO: The status of Pod pod-update-activedeadlineseconds-4f7e83a9-11f5-4003-af6c-b759e247a691 is Running (Ready = true) [1mSTEP[0m: verifying the pod is in kubernetes [1mSTEP[0m: updating the pod Jun 21 21:14:00.526: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4f7e83a9-11f5-4003-af6c-b759e247a691" Jun 21 21:14:00.526: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4f7e83a9-11f5-4003-af6c-b759e247a691" in namespace "pods-3724" to be "terminated due to deadline exceeded" Jun 21 21:14:00.623: INFO: Pod "pod-update-activedeadlineseconds-4f7e83a9-11f5-4003-af6c-b759e247a691": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 97.181357ms Jun 21 21:14:00.623: INFO: Pod "pod-update-activedeadlineseconds-4f7e83a9-11f5-4003-af6c-b759e247a691" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:00.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pods-3724" for this suite. 
[32m• [SLOW TEST:6.110 seconds][0m [sig-node] Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":99,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:00.822: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 24 lines ... [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Jun 21 21:13:57.431: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-78449ea1-7c92-48f0-a31f-75df9ea07d2e" in namespace "security-context-test-439" to be "Succeeded or Failed" Jun 21 21:13:57.531: INFO: Pod "alpine-nnp-true-78449ea1-7c92-48f0-a31f-75df9ea07d2e": Phase="Pending", Reason="", readiness=false. Elapsed: 100.004887ms Jun 21 21:13:59.629: INFO: Pod "alpine-nnp-true-78449ea1-7c92-48f0-a31f-75df9ea07d2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198284493s Jun 21 21:14:01.754: INFO: Pod "alpine-nnp-true-78449ea1-7c92-48f0-a31f-75df9ea07d2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.322968317s Jun 21 21:14:01.754: INFO: Pod "alpine-nnp-true-78449ea1-7c92-48f0-a31f-75df9ea07d2e" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:02.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-439" for this suite. ... skipping 2 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m when creating containers with AllowPrivilegeEscalation [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296[0m should allow privilege escalation when true [LinuxOnly] [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":36,"skipped":324,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:02.380: INFO: Only supported for providers [gce gke] (not aws) ... skipping 69 lines ... [32m• [SLOW TEST:9.057 seconds][0m [sig-apps] Job [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should adopt matching orphans and release non-matching pods [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":34,"skipped":316,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:02.425: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 97 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:02.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "svcaccounts-1055" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":16,"skipped":103,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:02.557: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 97 lines ... 
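The [sig-node] Security Context case earlier in this block ("should allow privilege escalation when true") exercises the per-container allowPrivilegeEscalation field via a pod like "alpine-nnp-true-...". As a rough illustration only (not the e2e framework's own helper; the pod name, namespace and image below are invented), a pod with that field set explicitly can be created with client-go roughly like this:

    // Sketch: create a pod whose container explicitly sets
    // allowPrivilegeEscalation=true, mirroring the behaviour exercised by the
    // "alpine-nnp-true-..." pod in the log above. Names, namespace and image
    // are illustrative assumptions, not values taken from the test.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        allowEscalation := true
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "nnp-true-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "demo",
                    Image:   "alpine:3.16", // illustrative image
                    Command: []string{"sh", "-c", "id"},
                    SecurityContext: &corev1.SecurityContext{
                        AllowPrivilegeEscalation: &allowEscalation,
                    },
                }},
            },
        }
        _, err = client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
    }

The conformance test goes further and asserts on the container's output; the sketch stops at creating the pod with the field set.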
Jun 21 21:14:00.105: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Jun 21 21:14:00.105: INFO: stdout: "scheduler controller-manager etcd-1 etcd-0" [1mSTEP[0m: getting details of componentstatuses [1mSTEP[0m: getting status of scheduler Jun 21 21:14:00.105: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9979 get componentstatuses scheduler' Jun 21 21:14:00.804: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Jun 21 21:14:00.804: INFO: stdout: "NAME STATUS MESSAGE ERROR\nscheduler Healthy ok \n" [1mSTEP[0m: getting status of controller-manager Jun 21 21:14:00.804: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9979 get componentstatuses controller-manager' Jun 21 21:14:01.498: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Jun 21 21:14:01.498: INFO: stdout: "NAME STATUS MESSAGE ERROR\ncontroller-manager Healthy ok \n" [1mSTEP[0m: getting status of etcd-1 Jun 21 21:14:01.498: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9979 get componentstatuses etcd-1' Jun 21 21:14:02.305: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Jun 21 21:14:02.305: INFO: stdout: "NAME STATUS MESSAGE ERROR\netcd-1 Healthy {\"health\":\"true\",\"reason\":\"\"} \n" [1mSTEP[0m: getting status of etcd-0 Jun 21 21:14:02.305: INFO: Running '/logs/artifacts/aab96967-f19d-11ec-8dfe-daa417708791/kubectl --server=https://api.e2e-143745cea3-c83fe.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9979 get componentstatuses etcd-0' Jun 21 21:14:02.988: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Jun 21 21:14:02.988: INFO: stdout: "NAME STATUS MESSAGE ERROR\netcd-0 Healthy {\"health\":\"true\",\"reason\":\"\"} \n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:02.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-9979" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":32,"skipped":252,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:03.199: INFO: Only supported for providers [openstack] (not aws) ... skipping 76 lines ... 
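Each of the kubectl calls above prints "Warning: v1 ComponentStatus is deprecated in v1.19+" on stderr; the API is still served (which is what this test relies on), and the same scheduler/controller-manager/etcd-0/etcd-1 statuses can be read programmatically. A minimal sketch, assuming the kubeconfig at /root/.kube/config shown in the log is reachable:

    // Sketch: list ComponentStatus objects the same way
    // `kubectl get componentstatuses` does in the log above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        list, err := client.CoreV1().ComponentStatuses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, cs := range list.Items {
            for _, cond := range cs.Conditions {
                // e.g. "scheduler  Healthy=True  ok", matching the kubectl output above
                fmt.Printf("%s\t%s=%s\t%s\n", cs.Name, cond.Type, cond.Status, cond.Message)
            }
        }
    }

Because the API is deprecated, probing component health endpoints directly (for example the API server's /livez and /readyz) is the commonly recommended long-term replacement; the deprecated listing above is kept only for compatibility checks like this test.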
[32m• [SLOW TEST:12.038 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m listing validating webhooks should work [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":27,"skipped":226,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:03.464: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 29 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:03.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-259" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]","total":-1,"completed":35,"skipped":330,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:03.691: INFO: Only supported for providers [gce gke] (not aws) ... skipping 78 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:04.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "sysctl-9401" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":28,"skipped":241,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":239,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:03:52.135: INFO: >>> kubeConfig: /root/.kube/config ... skipping 50 lines ... 
Jun 21 21:03:57.543: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumelimits-3556 [1mSTEP[0m: Checking csinode limits Jun 21 21:03:57.742: INFO: CSINodeInfo does not have driver csi-hostpath yet Jun 21 21:03:59.844: INFO: CSINodeInfo does not have driver csi-hostpath yet [1mSTEP[0m: deleting the test namespace: volumelimits-3556 [1mSTEP[0m: Waiting for namespaces [volumelimits-3556] to vanish Jun 21 21:09:02.329: INFO: error deleting namespace volumelimits-3556: timed out waiting for the condition [1mSTEP[0m: uninstalling csi csi-hostpath driver Jun 21 21:09:02.329: INFO: deleting *v1.ServiceAccount: volumelimits-3556-6374/csi-attacher Jun 21 21:09:02.458: INFO: deleting *v1.ClusterRole: external-attacher-runner-volumelimits-3556 Jun 21 21:09:02.557: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volumelimits-3556 Jun 21 21:09:02.656: INFO: deleting *v1.Role: volumelimits-3556-6374/external-attacher-cfg-volumelimits-3556 Jun 21 21:09:02.754: INFO: deleting *v1.RoleBinding: volumelimits-3556-6374/csi-attacher-role-cfg ... skipping 30 lines ... Jun 21 21:09:06.755: INFO: deleting *v1.RoleBinding: volumelimits-3556-6374/csi-hostpathplugin-resizer-role Jun 21 21:09:06.857: INFO: deleting *v1.RoleBinding: volumelimits-3556-6374/csi-hostpathplugin-snapshotter-role Jun 21 21:09:06.983: INFO: deleting *v1.StatefulSet: volumelimits-3556-6374/csi-hostpathplugin Jun 21 21:09:07.091: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumelimits-3556 [1mSTEP[0m: deleting the driver namespace: volumelimits-3556-6374 [1mSTEP[0m: Waiting for namespaces [volumelimits-3556-6374] to vanish Jun 21 21:14:07.653: INFO: error deleting namespace volumelimits-3556-6374: timed out waiting for the condition [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:07.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumelimits-3556" for this suite. [1mSTEP[0m: Destroying namespace "volumelimits-3556-6374" for this suite. ... skipping 5 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should verify that all csinodes have volume limits [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumelimits.go:247[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":33,"skipped":239,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:07.999: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern ... skipping 14 lines ... 
[36mDriver supports dynamic provisioning, skipping PreprovisionedPV pattern[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:249 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":23,"skipped":211,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:13:31.859: INFO: >>> kubeConfig: /root/.kube/config ... skipping 4 lines ... Jun 21 21:13:32.541: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics [1mSTEP[0m: creating a test aws volume Jun 21 21:13:33.222: INFO: Successfully created a new PD: "aws://eu-west-2a/vol-0a5161aec0e339943". Jun 21 21:13:33.222: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod exec-volume-test-inlinevolume-vwtv [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 21:13:33.418: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-vwtv" in namespace "volume-1532" to be "Succeeded or Failed" Jun 21 21:13:33.524: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Pending", Reason="", readiness=false. Elapsed: 106.632929ms Jun 21 21:13:35.646: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22876674s Jun 21 21:13:37.798: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380601271s Jun 21 21:13:39.910: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.492647308s Jun 21 21:13:42.009: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.591047682s Jun 21 21:13:44.167: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.749022255s Jun 21 21:13:46.264: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.846851829s Jun 21 21:13:48.364: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.946384417s Jun 21 21:13:50.462: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Pending", Reason="", readiness=false. Elapsed: 17.04416334s Jun 21 21:13:52.585: INFO: Pod "exec-volume-test-inlinevolume-vwtv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.167543536s [1mSTEP[0m: Saw pod success Jun 21 21:13:52.585: INFO: Pod "exec-volume-test-inlinevolume-vwtv" satisfied condition "Succeeded or Failed" Jun 21 21:13:52.717: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod exec-volume-test-inlinevolume-vwtv container exec-container-inlinevolume-vwtv: <nil> [1mSTEP[0m: delete the pod Jun 21 21:13:53.014: INFO: Waiting for pod exec-volume-test-inlinevolume-vwtv to disappear Jun 21 21:13:53.137: INFO: Pod exec-volume-test-inlinevolume-vwtv no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-inlinevolume-vwtv Jun 21 21:13:53.137: INFO: Deleting pod "exec-volume-test-inlinevolume-vwtv" in namespace "volume-1532" Jun 21 21:13:53.405: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0a5161aec0e339943", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0a5161aec0e339943 is currently attached to i-0a740318a9456a046 status code: 400, request id: 42820bec-dea0-4b87-b6a4-4b705d3c4585 Jun 21 21:13:58.910: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0a5161aec0e339943", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0a5161aec0e339943 is currently attached to i-0a740318a9456a046 status code: 400, request id: c515e903-9d47-447f-8069-490649a7c731 Jun 21 21:14:04.419: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0a5161aec0e339943", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0a5161aec0e339943 is currently attached to i-0a740318a9456a046 status code: 400, request id: 55a6169c-84d5-4ae6-b23e-bf14f44136c6 Jun 21 21:14:10.002: INFO: Successfully deleted PD "aws://eu-west-2a/vol-0a5161aec0e339943". [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:10.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-1532" for this suite. ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":24,"skipped":211,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:10.211: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 136 lines ... 
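The repeated "Couldn't delete PD ... VolumeInUse ... sleeping 5s" messages above are the expected race between pod teardown (which detaches the in-tree EBS volume) and volume cleanup: EC2 rejects DeleteVolume while vol-0a5161aec0e339943 is still attached to i-0a740318a9456a046, so the test simply retries until the detach completes and the final attempt succeeds. A rough sketch of that retry pattern with aws-sdk-go, assuming the eu-west-2 region from the log and an arbitrary retry budget (this is not the e2e framework's own code):

    // Sketch: DeleteVolume returns a VolumeInUse error while the volume is
    // still attached, so retry with a short sleep until the detach finishes.
    package main

    import (
        "fmt"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/awserr"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    func main() {
        sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("eu-west-2")}))
        svc := ec2.New(sess)
        volumeID := "vol-0a5161aec0e339943" // volume from the log above

        for attempt := 0; attempt < 60; attempt++ { // assumed retry budget
            _, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
            if err == nil {
                fmt.Println("deleted", volumeID)
                return
            }
            if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
                time.Sleep(5 * time.Second) // still attached; wait for the detach to complete
                continue
            }
            panic(err)
        }
    }

In the run above three attempts hit VolumeInUse before the volume was successfully deleted at 21:14:10, roughly 17 seconds after the pod disappeared.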
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 21 21:14:10.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1ffe9dd-f239-4885-bce0-cf713f5d0369" in namespace "downward-api-3100" to be "Succeeded or Failed" Jun 21 21:14:10.963: INFO: Pod "downwardapi-volume-f1ffe9dd-f239-4885-bce0-cf713f5d0369": Phase="Pending", Reason="", readiness=false. Elapsed: 99.937226ms Jun 21 21:14:13.061: INFO: Pod "downwardapi-volume-f1ffe9dd-f239-4885-bce0-cf713f5d0369": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197247162s Jun 21 21:14:15.159: INFO: Pod "downwardapi-volume-f1ffe9dd-f239-4885-bce0-cf713f5d0369": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.295287401s [1mSTEP[0m: Saw pod success Jun 21 21:14:15.159: INFO: Pod "downwardapi-volume-f1ffe9dd-f239-4885-bce0-cf713f5d0369" satisfied condition "Succeeded or Failed" Jun 21 21:14:15.256: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod downwardapi-volume-f1ffe9dd-f239-4885-bce0-cf713f5d0369 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:14:15.466: INFO: Waiting for pod downwardapi-volume-f1ffe9dd-f239-4885-bce0-cf713f5d0369 to disappear Jun 21 21:14:15.563: INFO: Pod downwardapi-volume-f1ffe9dd-f239-4885-bce0-cf713f5d0369 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.504 seconds][0m [sig-storage] Downward API volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide container's cpu limit [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":251,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:15.764: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping ... skipping 50 lines ... Jun 21 21:14:12.057: INFO: PersistentVolumeClaim pvc-g6kqx found but phase is Pending instead of Bound. 
Jun 21 21:14:14.155: INFO: PersistentVolumeClaim pvc-g6kqx found and phase=Bound (4.344750898s) Jun 21 21:14:14.155: INFO: Waiting up to 3m0s for PersistentVolume local-s6czt to have phase Bound Jun 21 21:14:14.252: INFO: PersistentVolume local-s6czt found and phase=Bound (96.937976ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-8qfd [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 21:14:14.658: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-8qfd" in namespace "volume-9644" to be "Succeeded or Failed" Jun 21 21:14:14.761: INFO: Pod "exec-volume-test-preprovisionedpv-8qfd": Phase="Pending", Reason="", readiness=false. Elapsed: 102.990068ms Jun 21 21:14:16.958: INFO: Pod "exec-volume-test-preprovisionedpv-8qfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.299749187s [1mSTEP[0m: Saw pod success Jun 21 21:14:16.958: INFO: Pod "exec-volume-test-preprovisionedpv-8qfd" satisfied condition "Succeeded or Failed" Jun 21 21:14:17.077: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-8qfd container exec-container-preprovisionedpv-8qfd: <nil> [1mSTEP[0m: delete the pod Jun 21 21:14:17.421: INFO: Waiting for pod exec-volume-test-preprovisionedpv-8qfd to disappear Jun 21 21:14:17.526: INFO: Pod exec-volume-test-preprovisionedpv-8qfd no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-8qfd Jun 21 21:14:17.526: INFO: Deleting pod "exec-volume-test-preprovisionedpv-8qfd" in namespace "volume-9644" ... skipping 32 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":36,"skipped":338,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes ... skipping 20 lines ... Jun 21 21:14:12.779: INFO: PersistentVolumeClaim pvc-hbvj8 found but phase is Pending instead of Bound. Jun 21 21:14:14.876: INFO: PersistentVolumeClaim pvc-hbvj8 found and phase=Bound (8.501466716s) Jun 21 21:14:14.876: INFO: Waiting up to 3m0s for PersistentVolume local-bms4t to have phase Bound Jun 21 21:14:14.972: INFO: PersistentVolume local-bms4t found and phase=Bound (96.509737ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-jb8j [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 21 21:14:15.265: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-jb8j" in namespace "volume-6015" to be "Succeeded or Failed" Jun 21 21:14:15.362: INFO: Pod "exec-volume-test-preprovisionedpv-jb8j": Phase="Pending", Reason="", readiness=false. 
Elapsed: 96.750424ms Jun 21 21:14:17.469: INFO: Pod "exec-volume-test-preprovisionedpv-jb8j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203771515s Jun 21 21:14:19.584: INFO: Pod "exec-volume-test-preprovisionedpv-jb8j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.319508665s [1mSTEP[0m: Saw pod success Jun 21 21:14:19.584: INFO: Pod "exec-volume-test-preprovisionedpv-jb8j" satisfied condition "Succeeded or Failed" Jun 21 21:14:19.681: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-jb8j container exec-container-preprovisionedpv-jb8j: <nil> [1mSTEP[0m: delete the pod Jun 21 21:14:19.884: INFO: Waiting for pod exec-volume-test-preprovisionedpv-jb8j to disappear Jun 21 21:14:19.981: INFO: Pod exec-volume-test-preprovisionedpv-jb8j no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-jb8j Jun 21 21:14:19.981: INFO: Deleting pod "exec-volume-test-preprovisionedpv-jb8j" in namespace "volume-6015" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":17,"skipped":123,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:21.368: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 29 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:21.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-4823" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":18,"skipped":124,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:22.153: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 55 lines ... 
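The "Kubectl create quota should reject quota with invalid scopes" case above relies on the API server validating a ResourceQuota's scopes field against a small fixed set (e.g. BestEffort, NotBestEffort, Terminating, NotTerminating); an unrecognised scope is rejected at admission. For contrast, a hedged sketch of creating a quota with a valid BestEffort scope via client-go, where the quota name, namespace and pod limit are illustrative:

    // Sketch: a ResourceQuota restricted to the BestEffort scope. Submitting an
    // unknown scope string instead is what the rejected-invalid-scopes test checks.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        quota := &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "besteffort-pods"},
            Spec: corev1.ResourceQuotaSpec{
                Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("10")},
                Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
            },
        }
        _, err = client.CoreV1().ResourceQuotas("default").Create(context.TODO(), quota, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
    }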
[32m• [SLOW TEST:15.068 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to change the type from ExternalName to ClusterIP [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":34,"skipped":244,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 8 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:22.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "node-lease-test-4013" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] NodeLease NodeLease should have OwnerReferences set","total":-1,"completed":19,"skipped":126,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:23.104: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 89 lines ... [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-map-95321108-e8ba-4cc9-9c2c-a261e1679dbb [1mSTEP[0m: Creating a pod to test consume configMaps Jun 21 21:14:21.873: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c498c6b6-7423-41ae-8b75-22ca48bddaea" in namespace "projected-3287" to be "Succeeded or Failed" Jun 21 21:14:21.970: INFO: Pod "pod-projected-configmaps-c498c6b6-7423-41ae-8b75-22ca48bddaea": Phase="Pending", Reason="", readiness=false. Elapsed: 96.201718ms Jun 21 21:14:24.071: INFO: Pod "pod-projected-configmaps-c498c6b6-7423-41ae-8b75-22ca48bddaea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197730284s Jun 21 21:14:26.169: INFO: Pod "pod-projected-configmaps-c498c6b6-7423-41ae-8b75-22ca48bddaea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.295793422s [1mSTEP[0m: Saw pod success Jun 21 21:14:26.169: INFO: Pod "pod-projected-configmaps-c498c6b6-7423-41ae-8b75-22ca48bddaea" satisfied condition "Succeeded or Failed" Jun 21 21:14:26.268: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-projected-configmaps-c498c6b6-7423-41ae-8b75-22ca48bddaea container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:14:26.469: INFO: Waiting for pod pod-projected-configmaps-c498c6b6-7423-41ae-8b75-22ca48bddaea to disappear Jun 21 21:14:26.574: INFO: Pod pod-projected-configmaps-c498c6b6-7423-41ae-8b75-22ca48bddaea no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.769 seconds][0m [sig-storage] Projected configMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":347,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:26.963: INFO: Only supported for providers [vsphere] (not aws) ... skipping 139 lines ... Jun 21 21:04:18.065: INFO: Waiting up to 5m0s for PersistentVolume pvc-742e1946-ff8d-4b0e-bf44-7c7bab36ac5d to get deleted Jun 21 21:04:18.170: INFO: PersistentVolume pvc-742e1946-ff8d-4b0e-bf44-7c7bab36ac5d found and phase=Released (104.913744ms) Jun 21 21:04:23.268: INFO: PersistentVolume pvc-742e1946-ff8d-4b0e-bf44-7c7bab36ac5d was removed [1mSTEP[0m: Deleting sc [1mSTEP[0m: deleting the test namespace: volume-expand-5089 [1mSTEP[0m: Waiting for namespaces [volume-expand-5089] to vanish Jun 21 21:09:23.828: INFO: error deleting namespace volume-expand-5089: timed out waiting for the condition [1mSTEP[0m: uninstalling csi csi-hostpath driver Jun 21 21:09:23.828: INFO: deleting *v1.ServiceAccount: volume-expand-5089-3933/csi-attacher Jun 21 21:09:23.952: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-5089 Jun 21 21:09:24.061: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-5089 Jun 21 21:09:24.158: INFO: deleting *v1.Role: volume-expand-5089-3933/external-attacher-cfg-volume-expand-5089 Jun 21 21:09:24.258: INFO: deleting *v1.RoleBinding: volume-expand-5089-3933/csi-attacher-role-cfg ... skipping 30 lines ... 
Jun 21 21:09:27.652: INFO: deleting *v1.RoleBinding: volume-expand-5089-3933/csi-hostpathplugin-resizer-role Jun 21 21:09:27.790: INFO: deleting *v1.RoleBinding: volume-expand-5089-3933/csi-hostpathplugin-snapshotter-role Jun 21 21:09:27.899: INFO: deleting *v1.StatefulSet: volume-expand-5089-3933/csi-hostpathplugin Jun 21 21:09:28.026: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-5089 [1mSTEP[0m: deleting the driver namespace: volume-expand-5089-3933 [1mSTEP[0m: Waiting for namespaces [volume-expand-5089-3933] to vanish Jun 21 21:14:28.773: INFO: error deleting namespace volume-expand-5089-3933: timed out waiting for the condition [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:28.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-expand-5089" for this suite. [1mSTEP[0m: Destroying namespace "volume-expand-5089-3933" for this suite. ... skipping 5 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should resize volume when PVC is edited while pod is using it [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":39,"skipped":313,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 4 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 [1mSTEP[0m: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating pod pod-subpath-test-configmap-55rh [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 21 21:14:05.201: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-55rh" in namespace "subpath-5713" to be "Succeeded or Failed" Jun 21 21:14:05.313: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Pending", Reason="", readiness=false. Elapsed: 112.756756ms Jun 21 21:14:07.412: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. Elapsed: 2.211537032s Jun 21 21:14:09.515: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. Elapsed: 4.314125934s Jun 21 21:14:11.613: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. Elapsed: 6.412684119s Jun 21 21:14:13.715: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.514148372s Jun 21 21:14:15.813: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. Elapsed: 10.611897796s Jun 21 21:14:17.914: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. Elapsed: 12.713193734s Jun 21 21:14:20.012: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. Elapsed: 14.811363789s Jun 21 21:14:22.138: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. Elapsed: 16.937390021s Jun 21 21:14:24.241: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. Elapsed: 19.040389204s Jun 21 21:14:26.352: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Running", Reason="", readiness=true. Elapsed: 21.151373626s Jun 21 21:14:28.455: INFO: Pod "pod-subpath-test-configmap-55rh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.253938686s [1mSTEP[0m: Saw pod success Jun 21 21:14:28.455: INFO: Pod "pod-subpath-test-configmap-55rh" satisfied condition "Succeeded or Failed" Jun 21 21:14:28.565: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-subpath-test-configmap-55rh container test-container-subpath-configmap-55rh: <nil> [1mSTEP[0m: delete the pod Jun 21 21:14:28.891: INFO: Waiting for pod pod-subpath-test-configmap-55rh to disappear Jun 21 21:14:28.988: INFO: Pod pod-subpath-test-configmap-55rh no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-configmap-55rh Jun 21 21:14:28.988: INFO: Deleting pod "pod-subpath-test-configmap-55rh" in namespace "subpath-5713" ... skipping 8 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m Atomic writer volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34[0m should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":29,"skipped":242,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 16 lines ... 
[32m• [SLOW TEST:95.077 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m works for multiple CRDs of same group but different versions [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":33,"skipped":241,"failed":0} [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:12:48.831: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename cronjob [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 12 lines ... [32m• [SLOW TEST:105.561 seconds][0m [sig-apps] CronJob [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should not emit unexpected warnings [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:216[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":34,"skipped":241,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:34.400: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 178 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41[0m when running a container with a new image [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266[0m should be able to pull from private registry with secret [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":30,"skipped":244,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:35.034: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 33 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:35.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubelet-test-4661" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":251,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:35.962: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 93 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99[0m should perform canary updates and phased rolling updates of template modifications [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":11,"skipped":44,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:37.472: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 171 lines ... [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":37,"skipped":270,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]"]} [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:14:31.986: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename hostpath [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 [1mSTEP[0m: Creating a pod to test hostPath mode Jun 21 21:14:32.628: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-686" to be "Succeeded or Failed" Jun 21 21:14:32.725: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 97.239512ms Jun 21 21:14:34.823: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.195308243s Jun 21 21:14:36.923: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.295424274s [1mSTEP[0m: Saw pod success Jun 21 21:14:36.923: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 21 21:14:37.022: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-host-path-test container test-container-1: <nil> [1mSTEP[0m: delete the pod Jun 21 21:14:37.302: INFO: Waiting for pod pod-host-path-test to disappear Jun 21 21:14:37.402: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.616 seconds][0m [sig-storage] HostPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should give a volume the correct mode [LinuxOnly] [NodeConformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":38,"skipped":270,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 12 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:38.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "configmap-1266" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":12,"skipped":63,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:38.856: INFO: Only supported for providers [gce gke] (not aws) ... skipping 23 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 21 21:14:38.203: INFO: Waiting up to 5m0s for pod "metadata-volume-6576c3f3-b0df-4abd-9a16-afba0698cb53" in namespace "projected-3196" to be "Succeeded or Failed" Jun 21 21:14:38.321: INFO: Pod "metadata-volume-6576c3f3-b0df-4abd-9a16-afba0698cb53": Phase="Pending", Reason="", readiness=false. 
Elapsed: 118.439888ms Jun 21 21:14:40.431: INFO: Pod "metadata-volume-6576c3f3-b0df-4abd-9a16-afba0698cb53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227700442s Jun 21 21:14:42.581: INFO: Pod "metadata-volume-6576c3f3-b0df-4abd-9a16-afba0698cb53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.377960406s [1mSTEP[0m: Saw pod success Jun 21 21:14:42.581: INFO: Pod "metadata-volume-6576c3f3-b0df-4abd-9a16-afba0698cb53" satisfied condition "Succeeded or Failed" Jun 21 21:14:42.688: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod metadata-volume-6576c3f3-b0df-4abd-9a16-afba0698cb53 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:14:42.920: INFO: Waiting for pod metadata-volume-6576c3f3-b0df-4abd-9a16-afba0698cb53 to disappear Jun 21 21:14:43.018: INFO: Pod metadata-volume-6576c3f3-b0df-4abd-9a16-afba0698cb53 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.607 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":39,"skipped":274,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]"]} [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:43.218: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 48 lines ... [32m• [SLOW TEST:11.133 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should mutate custom resource [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":35,"skipped":267,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:45.570: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 207 lines ... 
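The Projected downwardAPI test above ("should provide podname as non-root with fsgroup") creates a pod whose projected volume exposes `metadata.name` as a file while the pod runs with a non-root UID and an fsGroup. A sketch of the relevant volume and security context follows; the UID, GID, and file path are illustrative assumptions.

```go
// Illustrative projected downward API volume plus non-root/fsGroup security
// context, matching the shape of the test above. IDs and paths are hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid, gid := int64(1000), int64(2000) // hypothetical non-root UID / fsGroup

	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	sc := corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid}

	out, err := yaml.Marshal(struct {
		Volume          corev1.Volume             `json:"volume"`
		SecurityContext corev1.PodSecurityContext `json:"securityContext"`
	}{vol, sc})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```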
[32m• [SLOW TEST:29.556 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to create a functioning NodePort service [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":20,"skipped":137,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:52.673: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 11 lines ... [36mDriver "local" does not provide raw block - skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113 [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":36,"skipped":277,"failed":0} [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:14:48.636: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename secrets [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: creating secret secrets-6892/secret-test-711bad7a-058a-4d30-8e27-c7f5c5b76a43 [1mSTEP[0m: Creating a pod to test consume secrets Jun 21 21:14:49.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-9845fae9-b8d7-4c83-ab12-908753c524b5" in namespace "secrets-6892" to be "Succeeded or Failed" Jun 21 21:14:49.464: INFO: Pod "pod-configmaps-9845fae9-b8d7-4c83-ab12-908753c524b5": Phase="Pending", Reason="", readiness=false. Elapsed: 101.040686ms Jun 21 21:14:51.562: INFO: Pod "pod-configmaps-9845fae9-b8d7-4c83-ab12-908753c524b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19865733s Jun 21 21:14:53.660: INFO: Pod "pod-configmaps-9845fae9-b8d7-4c83-ab12-908753c524b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.297269528s [1mSTEP[0m: Saw pod success Jun 21 21:14:53.661: INFO: Pod "pod-configmaps-9845fae9-b8d7-4c83-ab12-908753c524b5" satisfied condition "Succeeded or Failed" Jun 21 21:14:53.759: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-configmaps-9845fae9-b8d7-4c83-ab12-908753c524b5 container env-test: <nil> [1mSTEP[0m: delete the pod Jun 21 21:14:53.971: INFO: Waiting for pod pod-configmaps-9845fae9-b8d7-4c83-ab12-908753c524b5 to disappear Jun 21 21:14:54.068: INFO: Pod pod-configmaps-9845fae9-b8d7-4c83-ab12-908753c524b5 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.749 seconds][0m [sig-node] Secrets [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be consumable via the environment [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":277,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:54.390: INFO: Only supported for providers [gce gke] (not aws) ... skipping 34 lines ... [32m• [SLOW TEST:60.937 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":105,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:56.413: INFO: Only supported for providers [openstack] (not aws) ... skipping 70 lines ... 
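The `[sig-node] Secrets should be consumable via the environment` test above creates a Secret and a pod whose `env-test` container reads a key from it through an environment variable. A sketch of that container definition follows; the secret name, key, and image are illustrative, not copied from the test source.

```go
// Illustrative container consuming a Secret key via an environment variable,
// as in the test above. Secret name, key and image are hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	container := corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
					Key:                  "data-1",
				},
			},
		}},
	}
	out, err := yaml.Marshal(container)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```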
Jun 21 21:04:34.134: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-llfrp] to have phase Bound Jun 21 21:04:34.240: INFO: PersistentVolumeClaim pvc-llfrp found and phase=Bound (105.233411ms) [1mSTEP[0m: Deleting the previously created pod Jun 21 21:04:44.729: INFO: Deleting pod "pvc-volume-tester-f7kbm" in namespace "csi-mock-volumes-5098" Jun 21 21:04:44.828: INFO: Wait up to 5m0s for pod "pvc-volume-tester-f7kbm" to be fully deleted [1mSTEP[0m: Checking CSI driver logs Jun 21 21:04:49.324: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/31e2398b-bfec-49f5-a3c2-7b136176ce40/volumes/kubernetes.io~csi/pvc-8bc58e70-defc-4781-a91c-eccf74deaf08/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} [1mSTEP[0m: Deleting pod pvc-volume-tester-f7kbm Jun 21 21:04:49.324: INFO: Deleting pod "pvc-volume-tester-f7kbm" in namespace "csi-mock-volumes-5098" [1mSTEP[0m: Deleting claim pvc-llfrp Jun 21 21:04:49.657: INFO: Waiting up to 2m0s for PersistentVolume pvc-8bc58e70-defc-4781-a91c-eccf74deaf08 to get deleted Jun 21 21:04:49.759: INFO: PersistentVolume pvc-8bc58e70-defc-4781-a91c-eccf74deaf08 found and phase=Released (102.07937ms) Jun 21 21:04:51.870: INFO: PersistentVolume pvc-8bc58e70-defc-4781-a91c-eccf74deaf08 found and phase=Released (2.21248283s) Jun 21 21:04:53.967: INFO: PersistentVolume pvc-8bc58e70-defc-4781-a91c-eccf74deaf08 was removed [1mSTEP[0m: Deleting storageclass csi-mock-volumes-5098-sc7b8d9 [1mSTEP[0m: Cleaning up resources [1mSTEP[0m: deleting the test namespace: csi-mock-volumes-5098 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-5098] to vanish Jun 21 21:09:54.625: INFO: error deleting namespace csi-mock-volumes-5098: timed out waiting for the condition [1mSTEP[0m: uninstalling csi mock driver Jun 21 21:09:54.625: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5098-8052/csi-attacher Jun 21 21:09:54.732: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5098 Jun 21 21:09:54.831: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5098 Jun 21 21:09:54.946: INFO: deleting *v1.Role: csi-mock-volumes-5098-8052/external-attacher-cfg-csi-mock-volumes-5098 Jun 21 21:09:55.052: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5098-8052/csi-attacher-role-cfg ... skipping 22 lines ... Jun 21 21:09:57.548: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5098 Jun 21 21:09:57.695: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5098-8052/csi-mockplugin Jun 21 21:09:57.797: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5098 Jun 21 21:09:57.932: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5098-8052/csi-mockplugin-attacher [1mSTEP[0m: deleting the driver namespace: csi-mock-volumes-5098-8052 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-5098-8052] to vanish Jun 21 21:14:58.554: INFO: error deleting namespace csi-mock-volumes-5098-8052: timed out waiting for the condition [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:14:58.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "csi-mock-volumes-5098" for this suite. [1mSTEP[0m: Destroying namespace "csi-mock-volumes-5098-8052" for this suite. ... skipping 3 lines ... 
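The CSI mock teardown above waits for the PersistentVolume to leave the `Released` phase and disappear ("Waiting up to 2m0s for PersistentVolume ... to get deleted"). The sketch below shows that polling pattern with client-go; it is not the e2e framework's helper, and it assumes in-cluster credentials and a hypothetical PV name.

```go
// Polling until a PersistentVolume is deleted, mirroring the "found and
// phase=Released" / "was removed" lines above. Assumes in-cluster config.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func waitForPVDeleted(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Printf("PersistentVolume %s was removed\n", name)
			return true, nil
		}
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolume %s found and phase=%s\n", name, pv.Status.Phase)
		return false, nil
	})
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVDeleted(cs, "pvc-example", 2*time.Minute); err != nil {
		panic(err)
	}
}
```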
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI workload information using mock driver [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:469[0m should not be passed when podInfoOnMount=false [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:519[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":19,"skipped":192,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:14:58.956: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 137 lines ... [32m• [SLOW TEST:6.468 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should mutate configmap [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":12,"skipped":118,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:15:02.893: INFO: Driver "csi-hostpath" does not support topology - skipping ... skipping 73 lines ... Jun 21 21:14:57.779: INFO: PersistentVolumeClaim pvc-jx5dq found but phase is Pending instead of Bound. Jun 21 21:14:59.879: INFO: PersistentVolumeClaim pvc-jx5dq found and phase=Bound (12.690416985s) Jun 21 21:14:59.879: INFO: Waiting up to 3m0s for PersistentVolume local-f4gvd to have phase Bound Jun 21 21:14:59.978: INFO: PersistentVolume local-f4gvd found and phase=Bound (98.668576ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-l9b5 [1mSTEP[0m: Creating a pod to test subpath Jun 21 21:15:00.284: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l9b5" in namespace "provisioning-3257" to be "Succeeded or Failed" Jun 21 21:15:00.397: INFO: Pod "pod-subpath-test-preprovisionedpv-l9b5": Phase="Pending", Reason="", readiness=false. Elapsed: 113.095093ms Jun 21 21:15:02.515: INFO: Pod "pod-subpath-test-preprovisionedpv-l9b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231416356s Jun 21 21:15:04.626: INFO: Pod "pod-subpath-test-preprovisionedpv-l9b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.342850602s [1mSTEP[0m: Saw pod success Jun 21 21:15:04.627: INFO: Pod "pod-subpath-test-preprovisionedpv-l9b5" satisfied condition "Succeeded or Failed" Jun 21 21:15:04.723: INFO: Trying to get logs from node ip-172-20-0-54.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-l9b5 container test-container-volume-preprovisionedpv-l9b5: <nil> [1mSTEP[0m: delete the pod Jun 21 21:15:04.961: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l9b5 to disappear Jun 21 21:15:05.061: INFO: Pod pod-subpath-test-preprovisionedpv-l9b5 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-l9b5 Jun 21 21:15:05.061: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l9b5" in namespace "provisioning-3257" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":40,"skipped":276,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]"]} [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:15:06.540: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 166 lines ... 
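The `[Driver: local][LocalVolumeType: dir]` test above binds a PVC to a pre-provisioned local PersistentVolume before running the subPath pod. A sketch of such a local PV follows: a directory on one node, pinned there with required node affinity. The path, capacity, storage class, and hostname value are illustrative assumptions.

```go
// Illustrative pre-provisioned local PersistentVolume pinned to a single node.
// Path, size, class name and hostname are hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pv := corev1.PersistentVolume{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "PersistentVolume"},
		ObjectMeta: metav1.ObjectMeta{Name: "local-example"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage",
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/disks/vol1"},
			},
			// A local PV must declare which node hosts the directory.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"ip-172-20-0-54.eu-west-2.compute.internal"},
						}},
					}},
				},
			},
		},
	}
	out, err := yaml.Marshal(pv)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```

A PVC referencing the same storage class (and a matching size) then binds to this PV before the test pod mounts it.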
[36mDriver local doesn't support InlineVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":37,"skipped":331,"failed":0} [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:15:04.813: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating projection with secret that has name projected-secret-test-map-bb64b7bc-e369-4029-974c-da136d7aeaeb [1mSTEP[0m: Creating a pod to test consume secrets Jun 21 21:15:05.547: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9104f595-1768-4d96-9b93-c7fdb0c4c987" in namespace "projected-4284" to be "Succeeded or Failed" Jun 21 21:15:05.644: INFO: Pod "pod-projected-secrets-9104f595-1768-4d96-9b93-c7fdb0c4c987": Phase="Pending", Reason="", readiness=false. Elapsed: 97.603704ms Jun 21 21:15:07.765: INFO: Pod "pod-projected-secrets-9104f595-1768-4d96-9b93-c7fdb0c4c987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218495686s [1mSTEP[0m: Saw pod success Jun 21 21:15:07.765: INFO: Pod "pod-projected-secrets-9104f595-1768-4d96-9b93-c7fdb0c4c987" satisfied condition "Succeeded or Failed" Jun 21 21:15:07.895: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-projected-secrets-9104f595-1768-4d96-9b93-c7fdb0c4c987 container projected-secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 21 21:15:08.286: INFO: Waiting for pod pod-projected-secrets-9104f595-1768-4d96-9b93-c7fdb0c4c987 to disappear Jun 21 21:15:08.440: INFO: Pod pod-projected-secrets-9104f595-1768-4d96-9b93-c7fdb0c4c987 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 81 lines ... 
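The projected-secret test above ("volume with mappings and Item Mode set") projects a Secret into a volume with explicit key-to-path mappings and a per-item file mode. A sketch of that volume definition follows; the secret name, key, path, and modes are illustrative.

```go
// Illustrative projected Secret volume with a key-to-path mapping and an
// explicit per-item mode. Names, keys and modes are hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	itemMode := int32(0400)    // the "Item Mode set" part of the test name
	defaultMode := int32(0644) // mode for items without an explicit mode

	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &itemMode,
						}},
					},
				}},
			},
		},
	}
	out, err := yaml.Marshal(vol)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```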
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m One pod requesting one prebound PVC [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209[0m should be able to mount volume and write from pod1 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":20,"skipped":206,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 18 lines ... Jun 21 21:14:41.936: INFO: PersistentVolumeClaim pvc-h847h found but phase is Pending instead of Bound. Jun 21 21:14:44.037: INFO: PersistentVolumeClaim pvc-h847h found and phase=Bound (4.303520566s) Jun 21 21:14:44.037: INFO: Waiting up to 3m0s for PersistentVolume local-7ttcz to have phase Bound Jun 21 21:14:44.154: INFO: PersistentVolume local-7ttcz found and phase=Bound (117.351665ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-hbdb [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 21 21:14:44.495: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hbdb" in namespace "provisioning-6341" to be "Succeeded or Failed" Jun 21 21:14:44.657: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Pending", Reason="", readiness=false. Elapsed: 162.730581ms Jun 21 21:14:46.774: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279217798s Jun 21 21:14:48.877: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381939966s Jun 21 21:14:50.975: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Running", Reason="", readiness=true. Elapsed: 6.480505894s Jun 21 21:14:53.075: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Running", Reason="", readiness=true. Elapsed: 8.580680262s Jun 21 21:14:55.174: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Running", Reason="", readiness=true. Elapsed: 10.679784035s ... skipping 2 lines ... Jun 21 21:15:01.469: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Running", Reason="", readiness=true. Elapsed: 16.974642385s Jun 21 21:15:03.568: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Running", Reason="", readiness=true. Elapsed: 19.073175355s Jun 21 21:15:05.666: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Running", Reason="", readiness=true. Elapsed: 21.170884421s Jun 21 21:15:07.767: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Running", Reason="", readiness=true. Elapsed: 23.272653097s Jun 21 21:15:09.868: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 25.373078439s [1mSTEP[0m: Saw pod success Jun 21 21:15:09.868: INFO: Pod "pod-subpath-test-preprovisionedpv-hbdb" satisfied condition "Succeeded or Failed" Jun 21 21:15:09.965: INFO: Trying to get logs from node ip-172-20-0-246.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-hbdb container test-container-subpath-preprovisionedpv-hbdb: <nil> [1mSTEP[0m: delete the pod Jun 21 21:15:10.184: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hbdb to disappear Jun 21 21:15:10.282: INFO: Pod pod-subpath-test-preprovisionedpv-hbdb no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-hbdb Jun 21 21:15:10.282: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hbdb" in namespace "provisioning-6341" ... skipping 21 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":32,"skipped":260,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:15:11.725: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping ... skipping 14 lines ... [36mDriver emptydir doesn't support PreprovisionedPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":331,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:15:08.744: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:49 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:66 [1mSTEP[0m: Creating a pod to test emptydir 0644 on tmpfs Jun 21 21:15:09.366: INFO: Waiting up to 5m0s for pod "pod-06ce86ac-60b8-47ff-bac8-f785694c44cd" in namespace "emptydir-3774" to be "Succeeded or Failed" Jun 21 21:15:09.463: INFO: Pod "pod-06ce86ac-60b8-47ff-bac8-f785694c44cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 97.510448ms Jun 21 21:15:11.564: INFO: Pod "pod-06ce86ac-60b8-47ff-bac8-f785694c44cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198521402s Jun 21 21:15:13.663: INFO: Pod "pod-06ce86ac-60b8-47ff-bac8-f785694c44cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.297559349s [1mSTEP[0m: Saw pod success Jun 21 21:15:13.663: INFO: Pod "pod-06ce86ac-60b8-47ff-bac8-f785694c44cd" satisfied condition "Succeeded or Failed" Jun 21 21:15:13.761: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod pod-06ce86ac-60b8-47ff-bac8-f785694c44cd container test-container: <nil> [1mSTEP[0m: delete the pod Jun 21 21:15:13.987: INFO: Waiting for pod pod-06ce86ac-60b8-47ff-bac8-f785694c44cd to disappear Jun 21 21:15:14.084: INFO: Pod pod-06ce86ac-60b8-47ff-bac8-f785694c44cd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:47[0m files with FSGroup ownership should support (root,0644,tmpfs) [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:66[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":39,"skipped":331,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:15:14.298: INFO: Only supported for providers [azure] (not aws) ... skipping 33 lines ... 
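The EmptyDir FSGroup test above ("files with FSGroup ownership should support (root,0644,tmpfs)") runs a pod with a memory-backed emptyDir, an fsGroup on the pod security context, and a container that writes a 0644 file as root, then checks the resulting ownership and mode. A sketch of that pod shape follows; the GID, image, paths, and command are illustrative.

```go
// Illustrative pod: tmpfs-backed emptyDir plus fsGroup, with a container that
// writes a 0644 file. IDs, image, paths and command are hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	fsGroup := int64(123) // files created in the volume get this group ownership

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "umask 0022 && echo hi > /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```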
Jun 21 21:14:27.499: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-52459cvqh 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-5245 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-52459cvqh,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},AllocatedResources:ResourceList{},ResizeStatus:nil,},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-5245 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-52459cvqh,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},AllocatedResources:ResourceList{},ResizeStatus:nil,},} [1mSTEP[0m: Creating a StorageClass [1mSTEP[0m: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-5245 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-52459cvqh,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},AllocatedResources:ResourceList{},ResizeStatus:nil,},} [1mSTEP[0m: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-52459cvqh 89188a69-3f27-4495-b046-c2f5ec861707 65103 0 2022-06-21 21:14:27 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update storage.k8s.io/v1 2022-06-21 21:14:27 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-s97zb pvc- provisioning-5245 21dd6238-5882-4289-b03d-bad419d172e8 65110 0 2022-06-21 21:14:27 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection] [{e2e.test Update v1 2022-06-21 21:14:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi 
BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-52459cvqh,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},AllocatedResources:ResourceList{},ResizeStatus:nil,},} [1mSTEP[0m: Deleting pod pod-496ddc6b-3b1e-48e8-ab2f-1bf814eae567 in namespace provisioning-5245 [1mSTEP[0m: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil} Jun 21 21:14:52.639: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-stl8v" in namespace "provisioning-5245" to be "Succeeded or Failed" Jun 21 21:14:52.736: INFO: Pod "pvc-volume-tester-writer-stl8v": Phase="Pending", Reason="", readiness=false. Elapsed: 96.518998ms Jun 21 21:14:54.833: INFO: Pod "pvc-volume-tester-writer-stl8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194190961s Jun 21 21:14:56.939: INFO: Pod "pvc-volume-tester-writer-stl8v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300399586s Jun 21 21:14:59.039: INFO: Pod "pvc-volume-tester-writer-stl8v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.399486859s [1mSTEP[0m: Saw pod success Jun 21 21:14:59.039: INFO: Pod "pvc-volume-tester-writer-stl8v" satisfied condition "Succeeded or Failed" Jun 21 21:14:59.254: INFO: Pod pvc-volume-tester-writer-stl8v has the following logs: Jun 21 21:14:59.254: INFO: Deleting pod "pvc-volume-tester-writer-stl8v" in namespace "provisioning-5245" Jun 21 21:14:59.366: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-stl8v" to be fully deleted [1mSTEP[0m: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-0-148.eu-west-2.compute.internal" Jun 21 21:14:59.758: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-8rnkz" in namespace "provisioning-5245" to be "Succeeded or Failed" Jun 21 21:14:59.860: INFO: Pod "pvc-volume-tester-reader-8rnkz": Phase="Pending", Reason="", readiness=false. Elapsed: 101.6708ms Jun 21 21:15:02.016: INFO: Pod "pvc-volume-tester-reader-8rnkz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258201597s Jun 21 21:15:04.136: INFO: Pod "pvc-volume-tester-reader-8rnkz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377393566s Jun 21 21:15:06.233: INFO: Pod "pvc-volume-tester-reader-8rnkz": Phase="Running", Reason="", readiness=true. Elapsed: 6.475238598s Jun 21 21:15:08.413: INFO: Pod "pvc-volume-tester-reader-8rnkz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.655092693s [1mSTEP[0m: Saw pod success Jun 21 21:15:08.413: INFO: Pod "pvc-volume-tester-reader-8rnkz" satisfied condition "Succeeded or Failed" Jun 21 21:15:08.715: INFO: Pod pvc-volume-tester-reader-8rnkz has the following logs: hello world Jun 21 21:15:08.716: INFO: Deleting pod "pvc-volume-tester-reader-8rnkz" in namespace "provisioning-5245" Jun 21 21:15:08.827: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-8rnkz" to be fully deleted Jun 21 21:15:08.950: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-s97zb] to have phase Bound Jun 21 21:15:09.049: INFO: PersistentVolumeClaim pvc-s97zb found and phase=Bound (99.038882ms) ... skipping 21 lines ... 
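The provisioning test above creates a StorageClass with `Provisioner: kubernetes.io/aws-ebs`, `MountOptions: [debug nouid32]`, `ReclaimPolicy: Delete`, and `VolumeBindingMode: WaitForFirstConsumer` (those values appear in the object dump in the log). A sketch of building that StorageClass with the Go API types follows; only the object name is made up.

```go
// StorageClass matching the mount-options provisioning test above; field values
// are taken from the logged object dump, the object name is illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	reclaim := corev1.PersistentVolumeReclaimDelete
	binding := storagev1.VolumeBindingWaitForFirstConsumer

	sc := storagev1.StorageClass{
		TypeMeta:          metav1.TypeMeta{APIVersion: "storage.k8s.io/v1", Kind: "StorageClass"},
		ObjectMeta:        metav1.ObjectMeta{Name: "provisioning-mount-options"},
		Provisioner:       "kubernetes.io/aws-ebs",
		ReclaimPolicy:     &reclaim,
		MountOptions:      []string{"debug", "nouid32"},
		VolumeBindingMode: &binding, // PVC stays Pending until a consuming pod is scheduled
	}
	out, err := yaml.Marshal(sc)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```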
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] provisioning [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should provision storage with mount options [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":38,"skipped":365,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:15:25.129: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 59 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 [1mSTEP[0m: Setting up data [It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating pod pod-subpath-test-configmap-s6b9 [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 21 21:15:03.675: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-s6b9" in namespace "subpath-9468" to be "Succeeded or Failed" Jun 21 21:15:03.771: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 95.78152ms Jun 21 21:15:05.868: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193217218s Jun 21 21:15:08.007: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Running", Reason="", readiness=true. Elapsed: 4.331575333s Jun 21 21:15:10.115: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Running", Reason="", readiness=true. Elapsed: 6.43993746s Jun 21 21:15:12.213: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Running", Reason="", readiness=true. Elapsed: 8.537402594s Jun 21 21:15:14.310: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Running", Reason="", readiness=true. Elapsed: 10.634395059s Jun 21 21:15:16.414: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Running", Reason="", readiness=true. Elapsed: 12.738685117s Jun 21 21:15:18.563: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Running", Reason="", readiness=true. Elapsed: 14.888150897s Jun 21 21:15:20.670: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Running", Reason="", readiness=true. Elapsed: 16.994463901s Jun 21 21:15:22.766: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Running", Reason="", readiness=true. Elapsed: 19.09062964s Jun 21 21:15:24.862: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Running", Reason="", readiness=true. Elapsed: 21.187073685s Jun 21 21:15:26.965: INFO: Pod "pod-subpath-test-configmap-s6b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 23.289757983s [1mSTEP[0m: Saw pod success Jun 21 21:15:26.965: INFO: Pod "pod-subpath-test-configmap-s6b9" satisfied condition "Succeeded or Failed" Jun 21 21:15:27.061: INFO: Trying to get logs from node ip-172-20-0-148.eu-west-2.compute.internal pod pod-subpath-test-configmap-s6b9 container test-container-subpath-configmap-s6b9: <nil> [1mSTEP[0m: delete the pod Jun 21 21:15:27.268: INFO: Waiting for pod pod-subpath-test-configmap-s6b9 to disappear Jun 21 21:15:27.364: INFO: Pod pod-subpath-test-configmap-s6b9 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-configmap-s6b9 Jun 21 21:15:27.364: INFO: Deleting pod "pod-subpath-test-configmap-s6b9" in namespace "subpath-9468" ... skipping 8 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m Atomic writer volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34[0m should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":13,"skipped":122,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:15:27.704: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 77 lines ... Jun 21 21:05:21.190: INFO: PersistentVolume pvc-20889e01-7caf-46ae-9945-7f02764f0cb1 found and phase=Released (108.149867ms) Jun 21 21:05:23.295: INFO: PersistentVolume pvc-20889e01-7caf-46ae-9945-7f02764f0cb1 was removed [1mSTEP[0m: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-6709 [1mSTEP[0m: Cleaning up resources [1mSTEP[0m: deleting the test namespace: csi-mock-volumes-6709 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-6709] to vanish Jun 21 21:10:24.046: INFO: error deleting namespace csi-mock-volumes-6709: timed out waiting for the condition [1mSTEP[0m: uninstalling csi mock driver Jun 21 21:10:24.046: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6709-5144/csi-attacher Jun 21 21:10:24.171: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6709 Jun 21 21:10:24.284: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6709 Jun 21 21:10:24.407: INFO: deleting *v1.Role: csi-mock-volumes-6709-5144/external-attacher-cfg-csi-mock-volumes-6709 Jun 21 21:10:24.512: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6709-5144/csi-attacher-role-cfg ... skipping 22 lines ... 
Jun 21 21:10:27.087: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6709 Jun 21 21:10:27.193: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6709-5144/csi-mockplugin Jun 21 21:10:27.291: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6709 Jun 21 21:10:27.397: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6709-5144/csi-mockplugin-attacher [1mSTEP[0m: deleting the driver namespace: csi-mock-volumes-6709-5144 [1mSTEP[0m: Waiting for namespaces [csi-mock-volumes-6709-5144] to vanish Jun 21 21:15:28.093: INFO: error deleting namespace csi-mock-volumes-6709-5144: timed out waiting for the condition [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:15:28.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "csi-mock-volumes-6709" for this suite. [1mSTEP[0m: Destroying namespace "csi-mock-volumes-6709-5144" for this suite. ... skipping 3 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSIStorageCapacity [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1336[0m CSIStorageCapacity used, have capacity [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1379[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":19,"skipped":126,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":35,"skipped":245,"failed":0} [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 21 21:15:27.480: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename ingress [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 23 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 21 21:15:30.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "ingress-9435" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":36,"skipped":245,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 21 21:15:30.920: INFO: Only supported for providers [azure] (not aws) ... skipping 104 lines ... 
Jun 21 21:14:59.962: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:00.060: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:00.160: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:00.263: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:00.380: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:00.491: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:00.491: INFO: Lookups using dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local] Jun 21 21:15:05.591: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:05.688: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:05.791: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:05.893: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:05.990: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod 
dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:06.088: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:06.200: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:06.298: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:06.298: INFO: Lookups using dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local] Jun 21 21:15:10.661: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:10.767: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:10.881: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:10.979: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:11.080: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:11.177: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:11.275: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) 
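The DNS lines above show probe pods repeatedly failing to resolve records such as `dns-test-service-2.dns-1451.svc.cluster.local` over both UDP and TCP ("wheezy_udp@...", "jessie_tcp@..."). The sketch below shows, in plain Go, roughly what such a lookup amounts to from inside a pod; the service/namespace FQDN is taken from the log, but the resolver wiring is an illustrative assumption, not the test's probe implementation.

```go
// Resolving a headless-service FQDN over the default (typically UDP) path and
// over TCP, roughly what the *_udp@ / *_tcp@ probes above exercise.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// FQDN format matches the records probed above; it differs per run/namespace.
	fqdn := "dns-test-service-2.dns-1451.svc.cluster.local"

	// Default resolver.
	addrs, err := net.LookupHost(fqdn)
	fmt.Printf("default lookup: addrs=%v err=%v\n", addrs, err)

	// Force lookups over TCP.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "tcp", address)
		},
	}
	addrs, err = r.LookupHost(context.Background(), fqdn)
	fmt.Printf("tcp lookup: addrs=%v err=%v\n", addrs, err)
}
```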
Jun 21 21:15:11.372: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:11.372: INFO: Lookups using dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local] Jun 21 21:15:15.590: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:15.691: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:15.788: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:15.911: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:16.057: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:16.171: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:16.292: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:16.397: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:16.397: INFO: Lookups using dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local] Jun 21 21:15:20.590: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:20.687: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:20.784: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:20.881: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:20.980: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:21.077: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:21.206: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:21.315: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:21.315: INFO: Lookups using dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local] Jun 21 21:15:25.590: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:25.688: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods 
dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:25.786: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:25.889: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:25.987: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:26.084: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:26.184: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:26.362: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local from pod dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815: the server could not find the requested resource (get pods dns-test-edf755ab-71d8-4c78-b823-692091a28815) Jun 21 21:15:26.362: INFO: Lookups using dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1451.svc.cluster.local jessie_udp@dns-test-service-2.dns-1451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1451.svc.cluster.local] Jun 21 21:15:31.284: INFO: DNS probes using dns-1451/dns-test-edf755ab-71d8-4c78-b823-692091a28815 succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: deleting the test headless service [AfterEach] [sig-network] DNS ... skipping 5 lines ... 
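Note on the probe output above: each "Unable to read" line appears to report one lookup of a headless-service or pod subdomain name, performed inside the probe pod (the wheezy_/jessie_ prefixes distinguish the two querier containers, _udp/_tcp the transport), with the result read back by the test from the pod; the "(get pods ...)" errors come from that read, and the probes are retried until every record resolves, which happens at 21:15:31.284. As a rough illustration only (not the e2e framework's actual probe code), a minimal Go sketch of the kind of in-cluster lookup being exercised, using a record name copied from the log and assuming it runs in a pod whose resolv.conf points at cluster DNS:

// dnsprobe_sketch.go -- hypothetical, for illustration; not part of the test suite.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Record name taken from the log above.
	const name = "dns-test-service-2.dns-1451.svc.cluster.local"

	// Use the pure-Go resolver so lookups go to the nameservers in
	// /etc/resolv.conf (the cluster DNS service when run inside a pod).
	r := &net.Resolver{PreferGo: true}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, name)
	if err != nil {
		// The conformance test keeps retrying on failure; here we just report it.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}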
• [SLOW TEST:39.060 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":21,"skipped":140,"failed":0}
S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 21 21:15:28.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token:
Jun 21 21:15:29.398: INFO: Waiting up to 5m0s for pod "test-pod-e2cf67af-62e5-4eb9-86da-6ee597b3eef7" in namespace "svcaccounts-4103" to be "Succeeded or Failed"
Jun 21 21:15:29.496: INFO: Pod "test-pod-e2cf67af-62e5-4eb9-86da-6ee597b3eef7": Phase="Pending", Reason="", readiness=false. Elapsed: 97.653248ms
Jun 21 21:15:31.595: INFO: Pod "test-pod-e2cf67af-62e5-4eb9-86da-6ee597b3eef7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.196258901s
STEP: Saw pod success
Jun 21 21:15:31.595: INFO: Pod "test-pod-e2cf67af-62e5-4eb9-86da-6ee597b3eef7" satisfied condition "Succeeded or Failed"
Jun 21 21:15:31.694: INFO: Trying to get logs from node ip-172-20-0-5.eu-west-2.compute.internal pod test-pod-e2cf67af-62e5-4eb9-86da-6ee597b3eef7 container agnhost-container: <nil>
STEP: delete the pod
Jun 21 21:15:31.908: INFO: Waiting for pod test-pod-e2cf67af-62e5-4eb9-86da-6ee597b3eef7 to disappear
Jun 21 21:15:32.015: INFO: Pod test-pod-e2cf67af-62e5-4eb9-86da-6ee597b3eef7 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 21 21:15:32.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4103" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":20,"skipped":129,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 21 21:15:32.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7699" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":22,"skipped":141,"fa