PR       | hakman: Fix GCE resource tracking
Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 38m21s
Revision | 2e78d7b38e033b8f3809d7da4a748b69161c6152
Refs     | 13859
... skipping 402 lines ...
Copying file:///home/prow/go/src/k8s.io/kops/.build/upload/latest-ci.txt [Content-Type=text/plain]...
/ [0 files][ 0.0 B/ 128.0 B] / [1 files][ 128.0 B/ 128.0 B]
Operation completed over 1 objects/128.0 B.
I0623 09:59:29.238500 5932 copy.go:30] cp /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops /logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kops
I0623 09:59:29.395614 5932 up.go:44] Cleaning up any leaked resources from previous cluster
I0623 09:59:29.395749 5932 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops toolbox dump --name e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh970144996/key --ssh-user prow
W0623 09:59:29.601429 5932 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0623 09:59:29.601481 5932 down.go:48] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops delete cluster --name e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local --yes
I0623 09:59:29.622267 38461 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 09:59:29.622361 38461 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local" not found
I0623 09:59:29.720899 5932 gcs.go:51] gsutil ls -b -p k8s-boskos-gce-project-15 gs://k8s-boskos-gce-project-15-state-05
I0623 09:59:31.163281 5932 gcs.go:70] gsutil mb -p k8s-boskos-gce-project-15 gs://k8s-boskos-gce-project-15-state-05
Creating gs://k8s-boskos-gce-project-15-state-05/...
I0623 09:59:33.185360 5932 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/23 09:59:33 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0623 09:59:33.201216 5932 http.go:37] curl https://ip.jsb.workers.dev
I0623 09:59:33.309013 5932 up.go:159] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops create cluster --name e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local --cloud gce --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.24.2 --ssh-public-key /tmp/kops-ssh970144996/key.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --channel=alpha --networking=cilium --container-runtime=containerd --gce-service-account=default --admin-access 35.222.20.247/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west3-a --master-size e2-standard-2 --project k8s-boskos-gce-project-15
I0623 09:59:33.330771 38749 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 09:59:33.330871 38749 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0623 09:59:33.355998 38749 create_cluster.go:862] Using SSH public key: /tmp/kops-ssh970144996/key.pub
I0623 09:59:33.605477 38749 new_cluster.go:425] VMs will be configured to use specified Service Account: default
... skipping 375 lines ...
I0623 09:59:40.756711 38770 keypair.go:225] Issuing new certificate: "service-account"
W0623 09:59:40.757359 38770 vfs_castore.go:379] CA private key was not found
I0623 09:59:40.761405 38770 keypair.go:225] Issuing new certificate: "etcd-peers-ca-main"
I0623 09:59:40.851451 38770 keypair.go:225] Issuing new certificate: "kubernetes-ca"
I0623 09:59:40.860522 38770 keypair.go:225] Issuing new certificate: "etcd-manager-ca-main"
I0623 09:59:50.633886 38770 executor.go:111] Tasks: 43 done / 68 total; 19 can run
W0623 10:00:04.282026 38770 executor.go:139] error running task "ForwardingRule/api-e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local" (9m46s remaining to succeed): error creating ForwardingRule "api-e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local": googleapi: Error 400: The resource 'projects/k8s-boskos-gce-project-15/regions/us-west3/targetPools/api-e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is not ready, resourceNotReady
I0623 10:00:04.282367 38770 executor.go:111] Tasks: 61 done / 68 total; 5 can run
I0623 10:00:14.908455 38770 executor.go:111] Tasks: 66 done / 68 total; 2 can run
I0623 10:00:35.074662 38770 executor.go:111] Tasks: 68 done / 68 total; 0 can run
I0623 10:00:35.164467 38770 update_cluster.go:326] Exporting kubeconfig for cluster
kOps has set your kubectl context to e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local
... skipping 8 lines ...
I0623 10:00:45.517784 5932 up.go:243] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops validate cluster --name e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local --count 10 --wait 15m0s
I0623 10:00:45.539690 38789 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0623 10:00:45.539908 38789 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local
W0623 10:01:15.875387 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: i/o timeout
W0623 10:01:25.898186 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:01:35.923438 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:01:45.948515 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:01:55.971920 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:02:05.993830 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:02:16.018503 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:02:26.042404 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:02:36.066248 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:02:46.092233 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:02:56.117218 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:03:06.140407 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:03:16.162672 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": dial tcp 34.106.25.134:443: connect: connection refused
W0623 10:03:36.188165 38789 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://34.106.25.134/api/v1/nodes": net/http: TLS handshake timeout
I0623 10:03:47.294090 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-us-west3-a       Master  e2-standard-2   1       1       us-west3
nodes-us-west3-a        Node    n1-standard-2   4       4       us-west3
... skipping 5 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/master-us-west3-a-xwk0 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/master-us-west3-a-xwk0" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-djk0 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-djk0" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-j6c5 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-j6c5" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-kn3q machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-kn3q" has not yet joined cluster
Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-x977 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-x977" has not yet joined cluster
Validation Failed
W0623 10:03:47.993518 38789 validate_cluster.go:232] (will retry): cluster not yet healthy
I0623 10:03:58.300201 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c]
INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-us-west3-a       Master  e2-standard-2   1       1       us-west3
nodes-us-west3-a        Node    n1-standard-2   4       4       us-west3
... skipping 6 lines ...
Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/master-us-west3-a-xwk0 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/master-us-west3-a-xwk0" has not yet joined cluster Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-djk0 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-djk0" has not yet joined cluster Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-j6c5 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-j6c5" has not yet joined cluster Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-kn3q machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-kn3q" has not yet joined cluster Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-x977 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-x977" has not yet joined cluster Validation Failed W0623 10:03:59.082587 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:04:09.483940 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 6 lines ... 
Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/master-us-west3-a-xwk0 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/master-us-west3-a-xwk0" has not yet joined cluster Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-djk0 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-djk0" has not yet joined cluster Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-j6c5 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-j6c5" has not yet joined cluster Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-kn3q machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-kn3q" has not yet joined cluster Machine https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-x977 machine "https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/nodes-us-west3-a-x977" has not yet joined cluster Validation Failed W0623 10:04:10.131989 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:04:20.583650 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 13 lines ... Pod kube-system/cloud-controller-manager-xdsmj system-cluster-critical pod "cloud-controller-manager-xdsmj" is pending Pod kube-system/coredns-57d68fdf4b-rgsl7 system-cluster-critical pod "coredns-57d68fdf4b-rgsl7" is pending Pod kube-system/coredns-autoscaler-676759bcc8-jjrhw system-cluster-critical pod "coredns-autoscaler-676759bcc8-jjrhw" is pending Pod kube-system/dns-controller-6b785dc767-fp6sv system-cluster-critical pod "dns-controller-6b785dc767-fp6sv" is pending Pod kube-system/kops-controller-gxr6g system-cluster-critical pod "kops-controller-gxr6g" is pending Validation Failed W0623 10:04:21.292987 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:04:31.601206 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 11 lines ... 
Pod kube-system/cilium-operator-56f498975b-bprrp system-cluster-critical pod "cilium-operator-56f498975b-bprrp" is pending Pod kube-system/cilium-sq9br system-node-critical pod "cilium-sq9br" is pending Pod kube-system/coredns-57d68fdf4b-rgsl7 system-cluster-critical pod "coredns-57d68fdf4b-rgsl7" is pending Pod kube-system/coredns-autoscaler-676759bcc8-jjrhw system-cluster-critical pod "coredns-autoscaler-676759bcc8-jjrhw" is pending Pod kube-system/etcd-manager-main-master-us-west3-a-xwk0 system-cluster-critical pod "etcd-manager-main-master-us-west3-a-xwk0" is pending Validation Failed W0623 10:04:32.349127 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:04:42.644036 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 12 lines ... Pod kube-system/cilium-operator-56f498975b-bprrp system-cluster-critical pod "cilium-operator-56f498975b-bprrp" is pending Pod kube-system/cilium-sq9br system-node-critical pod "cilium-sq9br" is pending Pod kube-system/coredns-57d68fdf4b-rgsl7 system-cluster-critical pod "coredns-57d68fdf4b-rgsl7" is pending Pod kube-system/coredns-autoscaler-676759bcc8-jjrhw system-cluster-critical pod "coredns-autoscaler-676759bcc8-jjrhw" is pending Pod kube-system/metadata-proxy-v0.12-g7l4h system-node-critical pod "metadata-proxy-v0.12-g7l4h" is pending Validation Failed W0623 10:04:43.349184 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:04:53.749735 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 23 lines ... Pod kube-system/coredns-autoscaler-676759bcc8-jjrhw system-cluster-critical pod "coredns-autoscaler-676759bcc8-jjrhw" is pending Pod kube-system/metadata-proxy-v0.12-495k7 system-node-critical pod "metadata-proxy-v0.12-495k7" is pending Pod kube-system/metadata-proxy-v0.12-4xwgv system-node-critical pod "metadata-proxy-v0.12-4xwgv" is pending Pod kube-system/metadata-proxy-v0.12-g7l4h system-node-critical pod "metadata-proxy-v0.12-g7l4h" is pending Pod kube-system/metadata-proxy-v0.12-s26bv system-node-critical pod "metadata-proxy-v0.12-s26bv" is pending Validation Failed W0623 10:04:54.397210 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:05:04.782072 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 24 lines ... 
Pod kube-system/metadata-proxy-v0.12-495k7 system-node-critical pod "metadata-proxy-v0.12-495k7" is pending Pod kube-system/metadata-proxy-v0.12-4xwgv system-node-critical pod "metadata-proxy-v0.12-4xwgv" is pending Pod kube-system/metadata-proxy-v0.12-g7l4h system-node-critical pod "metadata-proxy-v0.12-g7l4h" is pending Pod kube-system/metadata-proxy-v0.12-lppfq system-node-critical pod "metadata-proxy-v0.12-lppfq" is pending Pod kube-system/metadata-proxy-v0.12-s26bv system-node-critical pod "metadata-proxy-v0.12-s26bv" is pending Validation Failed W0623 10:05:05.457480 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:05:15.864296 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 20 lines ... Pod kube-system/cilium-z2pkv system-node-critical pod "cilium-z2pkv" is not ready (cilium-agent) Pod kube-system/coredns-57d68fdf4b-rgsl7 system-cluster-critical pod "coredns-57d68fdf4b-rgsl7" is pending Pod kube-system/coredns-autoscaler-676759bcc8-jjrhw system-cluster-critical pod "coredns-autoscaler-676759bcc8-jjrhw" is pending Pod kube-system/metadata-proxy-v0.12-g7l4h system-node-critical pod "metadata-proxy-v0.12-g7l4h" is pending Pod kube-system/metadata-proxy-v0.12-lppfq system-node-critical pod "metadata-proxy-v0.12-lppfq" is pending Validation Failed W0623 10:05:16.481002 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:05:26.773862 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 18 lines ... Pod kube-system/cilium-z2pkv system-node-critical pod "cilium-z2pkv" is not ready (cilium-agent) Pod kube-system/coredns-57d68fdf4b-rgsl7 system-cluster-critical pod "coredns-57d68fdf4b-rgsl7" is pending Pod kube-system/coredns-autoscaler-676759bcc8-jjrhw system-cluster-critical pod "coredns-autoscaler-676759bcc8-jjrhw" is pending Pod kube-system/metadata-proxy-v0.12-g7l4h system-node-critical pod "metadata-proxy-v0.12-g7l4h" is pending Pod kube-system/metadata-proxy-v0.12-lppfq system-node-critical pod "metadata-proxy-v0.12-lppfq" is pending Validation Failed W0623 10:05:27.498727 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:05:37.881849 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 17 lines ... 
Pod kube-system/cilium-z2pkv system-node-critical pod "cilium-z2pkv" is not ready (cilium-agent) Pod kube-system/coredns-57d68fdf4b-rgsl7 system-cluster-critical pod "coredns-57d68fdf4b-rgsl7" is pending Pod kube-system/coredns-autoscaler-676759bcc8-jjrhw system-cluster-critical pod "coredns-autoscaler-676759bcc8-jjrhw" is pending Pod kube-system/metadata-proxy-v0.12-g7l4h system-node-critical pod "metadata-proxy-v0.12-g7l4h" is pending Pod kube-system/metadata-proxy-v0.12-lppfq system-node-critical pod "metadata-proxy-v0.12-lppfq" is pending Validation Failed W0623 10:05:38.565253 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:05:48.877512 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 14 lines ... Pod kube-system/cilium-sq9br system-node-critical pod "cilium-sq9br" is not ready (cilium-agent) Pod kube-system/cilium-z2pkv system-node-critical pod "cilium-z2pkv" is not ready (cilium-agent) Pod kube-system/coredns-57d68fdf4b-rgsl7 system-cluster-critical pod "coredns-57d68fdf4b-rgsl7" is pending Pod kube-system/coredns-autoscaler-676759bcc8-jjrhw system-cluster-critical pod "coredns-autoscaler-676759bcc8-jjrhw" is pending Pod kube-system/metadata-proxy-v0.12-g7l4h system-node-critical pod "metadata-proxy-v0.12-g7l4h" is pending Validation Failed W0623 10:05:49.658246 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:05:59.997015 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 9 lines ... VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/cilium-sq9br system-node-critical pod "cilium-sq9br" is not ready (cilium-agent) Pod kube-system/coredns-57d68fdf4b-rgsl7 system-cluster-critical pod "coredns-57d68fdf4b-rgsl7" is pending Pod kube-system/coredns-autoscaler-676759bcc8-jjrhw system-cluster-critical pod "coredns-autoscaler-676759bcc8-jjrhw" is pending Validation Failed W0623 10:06:00.788874 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:06:11.243641 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 7 lines ... nodes-us-west3-a-x977 node True VALIDATION ERRORS KIND NAME MESSAGE Pod kube-system/coredns-57d68fdf4b-zl2zj system-cluster-critical pod "coredns-57d68fdf4b-zl2zj" is pending Validation Failed W0623 10:06:11.995018 38789 validate_cluster.go:232] (will retry): cluster not yet healthy I0623 10:06:22.346935 38789 gce_cloud.go:295] Scanning zones: [us-west3-a us-west3-b us-west3-c] INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west3-a Master e2-standard-2 1 1 us-west3 nodes-us-west3-a Node n1-standard-2 4 4 us-west3 ... skipping 183 lines ... 
=================================== Random Seed: [1m1655978906[0m - Will randomize all specs Will run [1m6971[0m specs Running in parallel across [1m25[0m nodes Jun 23 10:08:45.631: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1 Jun 23 10:08:45.631: INFO: > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource [] Jun 23 10:08:45.631: INFO: > Jun 23 10:08:45.631: INFO: Cluster image sources lookup failed: exit status 1 Jun 23 10:08:45.631: INFO: >>> kubeConfig: /root/.kube/config Jun 23 10:08:45.633: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jun 23 10:08:45.753: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 23 10:08:45.851: INFO: 22 / 22 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 23 10:08:45.851: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready. ... skipping 138 lines ... test/e2e/framework/framework.go:188 Jun 23 10:08:46.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "podtemplate-5464" for this suite. [32m•[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/sysctl.go:37 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] ... skipping 14 lines ... test/e2e/framework/framework.go:188 Jun 23 10:08:46.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "sysctl-14" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 14 lines ... test/e2e/framework/framework.go:188 Jun 23 10:08:46.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "svcaccounts-6375" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-instrumentation] Events test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... 
test/e2e/framework/framework.go:188 Jun 23 10:08:47.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-4817" for this suite. [32m•[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":1,"skipped":29,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... [32m• [SLOW TEST:7.100 seconds][0m [sig-node] Pods [90mtest/e2e/common/node/framework.go:23[0m should be updated [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Kubelet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... [90mtest/e2e/common/node/framework.go:23[0m when scheduling a read only busybox container [90mtest/e2e/common/node/kubelet.go:190[0m should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":45,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicationController test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 13 lines ... test/e2e/framework/framework.go:188 Jun 23 10:08:54.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replication-controller-5354" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:08:54.947: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap that has name configmap-test-emptyKey-3301286d-9b15-4e28-989d-28183844369e [AfterEach] [sig-node] ConfigMap test/e2e/framework/framework.go:188 Jun 23 10:08:55.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "configmap-3038" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 54 lines ... [32m• [SLOW TEST:10.376 seconds][0m [sig-auth] ServiceAccounts [90mtest/e2e/auth/framework.go:23[0m should mount an API token into pods [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Downward API test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... Jun 23 10:08:46.306: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward api env vars Jun 23 10:08:46.448: INFO: Waiting up to 5m0s for pod "downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9" in namespace "downward-api-7441" to be "Succeeded or Failed" Jun 23 10:08:46.479: INFO: Pod "downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.819148ms Jun 23 10:08:48.505: INFO: Pod "downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056469201s Jun 23 10:08:50.534: INFO: Pod "downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085668801s Jun 23 10:08:52.561: INFO: Pod "downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112044465s Jun 23 10:08:54.586: INFO: Pod "downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9": Phase="Running", Reason="", readiness=false. Elapsed: 8.137698311s Jun 23 10:08:56.612: INFO: Pod "downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.163481491s [1mSTEP[0m: Saw pod success Jun 23 10:08:56.612: INFO: Pod "downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9" satisfied condition "Succeeded or Failed" Jun 23 10:08:56.637: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:08:56.775: INFO: Waiting for pod downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9 to disappear Jun 23 10:08:56.804: INFO: Pod downward-api-cc9d8ec2-c050-40b1-b59b-b2b5067521f9 no longer exists [AfterEach] [sig-node] Downward API test/e2e/framework/framework.go:188 ... skipping 17 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating secret with name secret-test-e8be990a-3fd8-42db-8dc4-a8b7817a385c [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:08:46.502: INFO: Waiting up to 5m0s for pod "pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf" in namespace "secrets-8012" to be "Succeeded or Failed" Jun 23 10:08:46.545: INFO: Pod "pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 42.224131ms Jun 23 10:08:48.568: INFO: Pod "pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065654954s Jun 23 10:08:50.597: INFO: Pod "pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094688463s Jun 23 10:08:52.625: INFO: Pod "pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf": Phase="Running", Reason="", readiness=true. Elapsed: 6.122868119s Jun 23 10:08:54.649: INFO: Pod "pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf": Phase="Running", Reason="", readiness=false. Elapsed: 8.146829662s Jun 23 10:08:56.674: INFO: Pod "pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.171988768s [1mSTEP[0m: Saw pod success Jun 23 10:08:56.699: INFO: Pod "pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf" satisfied condition "Succeeded or Failed" Jun 23 10:08:56.723: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:08:56.782: INFO: Waiting for pod pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf to disappear Jun 23 10:08:56.805: INFO: Pod pod-secrets-fde0a358-9989-4fd2-ad42-f73677e92fbf no longer exists [AfterEach] [sig-storage] Secrets test/e2e/framework/framework.go:188 ... skipping 38 lines ... 
[32m• [SLOW TEST:10.544 seconds][0m [sig-node] Pods [90mtest/e2e/common/node/framework.go:23[0m should get a host IP [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 25 lines ... [90mtest/e2e/storage/utils/framework.go:23[0m should not conflict [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-map-529d4f33-5958-474f-b791-39d517ebf1a5 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:08:46.281: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3" in namespace "projected-4981" to be "Succeeded or Failed" Jun 23 10:08:46.306: INFO: Pod "pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3": Phase="Pending", Reason="", readiness=false. Elapsed: 23.696087ms Jun 23 10:08:48.330: INFO: Pod "pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048210374s Jun 23 10:08:50.356: INFO: Pod "pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074024921s Jun 23 10:08:52.382: INFO: Pod "pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3": Phase="Running", Reason="", readiness=true. Elapsed: 6.099891243s Jun 23 10:08:54.408: INFO: Pod "pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3": Phase="Running", Reason="", readiness=false. Elapsed: 8.126337674s Jun 23 10:08:56.453: INFO: Pod "pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.171474946s [1mSTEP[0m: Saw pod success Jun 23 10:08:56.453: INFO: Pod "pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3" satisfied condition "Succeeded or Failed" Jun 23 10:08:56.488: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:08:56.930: INFO: Waiting for pod pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3 to disappear Jun 23 10:08:56.962: INFO: Pod pod-projected-configmaps-14c0beaa-c13b-4fb0-bfea-0f4f1afdbaa3 no longer exists [AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:188 ... skipping 6 lines ... [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:08:57.080: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename secrets [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating projection with secret that has name secret-emptykey-test-74a13de2-249b-4b80-8975-e83d5b7c2fce [AfterEach] [sig-node] Secrets test/e2e/framework/framework.go:188 Jun 23 10:08:57.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-8249" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 21 lines ... [32m• [SLOW TEST:11.497 seconds][0m [sig-api-machinery] ResourceQuota [90mtest/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a replica set. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... 
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43 [It] should provide container's memory limit [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:08:47.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e" in namespace "downward-api-2134" to be "Succeeded or Failed" Jun 23 10:08:47.352: INFO: Pod "downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.395179ms Jun 23 10:08:49.378: INFO: Pod "downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050258863s Jun 23 10:08:51.404: INFO: Pod "downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075914737s Jun 23 10:08:53.430: INFO: Pod "downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102627589s Jun 23 10:08:55.458: INFO: Pod "downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129746668s Jun 23 10:08:57.484: INFO: Pod "downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.156410342s [1mSTEP[0m: Saw pod success Jun 23 10:08:57.484: INFO: Pod "downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e" satisfied condition "Succeeded or Failed" Jun 23 10:08:57.509: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:08:57.572: INFO: Waiting for pod downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e to disappear Jun 23 10:08:57.602: INFO: Pod downwardapi-volume-3fa98327-0b48-4696-b085-b87a0961378e no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188 ... skipping 6 lines ... [90mtest/e2e/common/storage/framework.go:23[0m should provide container's memory limit [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":67,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicationController test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 37 lines ... 
[90mtest/e2e/apps/framework.go:23[0m should test the lifecycle of a ReplicationController [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-8171957f-29e3-4cc4-abff-8a722f41c4c6 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:08:47.431: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355" in namespace "projected-8553" to be "Succeeded or Failed" Jun 23 10:08:47.455: INFO: Pod "pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355": Phase="Pending", Reason="", readiness=false. Elapsed: 24.32926ms Jun 23 10:08:49.481: INFO: Pod "pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050090806s Jun 23 10:08:51.507: INFO: Pod "pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076542816s Jun 23 10:08:53.535: INFO: Pod "pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104563979s Jun 23 10:08:55.561: INFO: Pod "pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130472883s Jun 23 10:08:57.594: INFO: Pod "pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.162926035s [1mSTEP[0m: Saw pod success Jun 23 10:08:57.594: INFO: Pod "pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355" satisfied condition "Succeeded or Failed" Jun 23 10:08:57.623: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355 container projected-configmap-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:08:57.718: INFO: Waiting for pod pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355 to disappear Jun 23 10:08:57.742: INFO: Pod pod-projected-configmaps-b26c6cca-30da-4eeb-b5cf-b3746a5f3355 no longer exists [AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:188 ... skipping 6 lines ... 
[90mtest/e2e/common/storage/framework.go:23[0m should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":43,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:08:53.391: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0644 on node default medium Jun 23 10:08:53.598: INFO: Waiting up to 5m0s for pod "pod-d26203e1-e6d6-45ce-af4b-17169aab2c2e" in namespace "emptydir-7760" to be "Succeeded or Failed" Jun 23 10:08:53.630: INFO: Pod "pod-d26203e1-e6d6-45ce-af4b-17169aab2c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.61996ms Jun 23 10:08:55.657: INFO: Pod "pod-d26203e1-e6d6-45ce-af4b-17169aab2c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059052223s Jun 23 10:08:57.684: INFO: Pod "pod-d26203e1-e6d6-45ce-af4b-17169aab2c2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086077151s [1mSTEP[0m: Saw pod success Jun 23 10:08:57.684: INFO: Pod "pod-d26203e1-e6d6-45ce-af4b-17169aab2c2e" satisfied condition "Succeeded or Failed" Jun 23 10:08:57.721: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod pod-d26203e1-e6d6-45ce-af4b-17169aab2c2e container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:08:57.846: INFO: Waiting for pod pod-d26203e1-e6d6-45ce-af4b-17169aab2c2e to disappear Jun 23 10:08:57.900: INFO: Pod pod-d26203e1-e6d6-45ce-af4b-17169aab2c2e no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 Jun 23 10:08:57.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "emptydir-7760" for this suite. [32m•[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":51,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Ingress API test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... 
test/e2e/framework/framework.go:188 Jun 23 10:08:58.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "ingress-1338" for this suite. [32m•[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":2,"skipped":47,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] PreStop test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 81 lines ... [32m• [SLOW TEST:13.428 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m patching/updating a validating webhook should work [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":1,"skipped":88,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected secret test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating projection with secret that has name projected-secret-test-0cd15948-073b-4aec-9d31-326d12418c34 [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:08:56.042: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0c514520-fcbe-4171-98ea-5584ecb3d84a" in namespace "projected-93" to be "Succeeded or Failed" Jun 23 10:08:56.067: INFO: Pod "pod-projected-secrets-0c514520-fcbe-4171-98ea-5584ecb3d84a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.712888ms Jun 23 10:08:58.095: INFO: Pod "pod-projected-secrets-0c514520-fcbe-4171-98ea-5584ecb3d84a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05315399s Jun 23 10:09:00.149: INFO: Pod "pod-projected-secrets-0c514520-fcbe-4171-98ea-5584ecb3d84a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.107104476s [1mSTEP[0m: Saw pod success Jun 23 10:09:00.149: INFO: Pod "pod-projected-secrets-0c514520-fcbe-4171-98ea-5584ecb3d84a" satisfied condition "Succeeded or Failed" Jun 23 10:09:00.326: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-projected-secrets-0c514520-fcbe-4171-98ea-5584ecb3d84a container projected-secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:00.678: INFO: Waiting for pod pod-projected-secrets-0c514520-fcbe-4171-98ea-5584ecb3d84a to disappear Jun 23 10:09:00.711: INFO: Pod pod-projected-secrets-0c514520-fcbe-4171-98ea-5584ecb3d84a no longer exists [AfterEach] [sig-storage] Projected secret test/e2e/framework/framework.go:188 ... skipping 46 lines ... [32m• [SLOW TEST:14.336 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should be able to deny custom resource creation, update and deletion [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":157,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":1,"skipped":39,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 51 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-8cea68e6-e934-4370-8763-a799a395a257 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:08:46.615: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2" in namespace "projected-194" to be "Succeeded or Failed" Jun 23 10:08:46.683: INFO: Pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 67.283468ms Jun 23 10:08:48.710: INFO: Pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094133745s Jun 23 10:08:50.735: INFO: Pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119018147s Jun 23 10:08:52.763: INFO: Pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147108581s Jun 23 10:08:54.788: INFO: Pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171631727s Jun 23 10:08:56.811: INFO: Pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.1949575s Jun 23 10:08:58.903: INFO: Pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2": Phase="Running", Reason="", readiness=true. Elapsed: 12.28720699s Jun 23 10:09:00.932: INFO: Pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.316218926s [1mSTEP[0m: Saw pod success Jun 23 10:09:00.932: INFO: Pod "pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2" satisfied condition "Succeeded or Failed" Jun 23 10:09:00.961: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:01.035: INFO: Waiting for pod pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2 to disappear Jun 23 10:09:01.063: INFO: Pod pod-projected-configmaps-4dc6b112-a2d7-47ed-b22a-b452cd04cfe2 no longer exists [AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:188 ... skipping 43 lines ... [90mtest/e2e/kubectl/framework.go:23[0m Kubectl server-side dry-run [90mtest/e2e/kubectl/kubectl.go:927[0m should check if kubectl can dry-run update Pods [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":2,"skipped":40,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 18 lines ... [1mSTEP[0m: verifying the pod is in kubernetes [1mSTEP[0m: updating the pod Jun 23 10:08:57.042: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fe6931c2-4c0a-4a80-932b-b58e3cdeadba" Jun 23 10:08:57.042: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fe6931c2-4c0a-4a80-932b-b58e3cdeadba" in namespace "pods-883" to be "terminated due to deadline exceeded" Jun 23 10:08:57.066: INFO: Pod "pod-update-activedeadlineseconds-fe6931c2-4c0a-4a80-932b-b58e3cdeadba": Phase="Running", Reason="", readiness=true. Elapsed: 23.908648ms Jun 23 10:08:59.095: INFO: Pod "pod-update-activedeadlineseconds-fe6931c2-4c0a-4a80-932b-b58e3cdeadba": Phase="Running", Reason="", readiness=true. Elapsed: 2.052577771s Jun 23 10:09:01.147: INFO: Pod "pod-update-activedeadlineseconds-fe6931c2-4c0a-4a80-932b-b58e3cdeadba": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.105207167s Jun 23 10:09:01.147: INFO: Pod "pod-update-activedeadlineseconds-fe6931c2-4c0a-4a80-932b-b58e3cdeadba" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods test/e2e/framework/framework.go:188 Jun 23 10:09:01.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pods-883" for this suite. ... skipping 3 lines ... 
test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 23 10:08:56.946: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae44038d-e4eb-44f7-9373-6c2e54b2ae97" in namespace "projected-6449" to be "Succeeded or Failed"
Jun 23 10:08:56.977: INFO: Pod "downwardapi-volume-ae44038d-e4eb-44f7-9373-6c2e54b2ae97": Phase="Pending", Reason="", readiness=false. Elapsed: 30.269244ms
Jun 23 10:08:59.034: INFO: Pod "downwardapi-volume-ae44038d-e4eb-44f7-9373-6c2e54b2ae97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088183148s
Jun 23 10:09:01.067: INFO: Pod "downwardapi-volume-ae44038d-e4eb-44f7-9373-6c2e54b2ae97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12097932s
STEP: Saw pod success
Jun 23 10:09:01.067: INFO: Pod "downwardapi-volume-ae44038d-e4eb-44f7-9373-6c2e54b2ae97" satisfied condition "Succeeded or Failed"
Jun 23 10:09:01.106: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod downwardapi-volume-ae44038d-e4eb-44f7-9373-6c2e54b2ae97 container client-container: <nil>
STEP: delete the pod
Jun 23 10:09:01.211: INFO: Waiting for pod downwardapi-volume-ae44038d-e4eb-44f7-9373-6c2e54b2ae97 to disappear
Jun 23 10:09:01.244: INFO: Pod downwardapi-volume-ae44038d-e4eb-44f7-9373-6c2e54b2ae97 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:188
Jun 23 10:09:01.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6449" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:188
Jun 23 10:09:02.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3487" for this suite.
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-architecture] Conformance Tests test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 44 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:03.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "endpointslice-2481" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]","total":-1,"completed":4,"skipped":69,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":2,"skipped":43,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Containers test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... Jun 23 10:08:46.793: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test override arguments Jun 23 10:08:47.096: INFO: Waiting up to 5m0s for pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c" in namespace "containers-6975" to be "Succeeded or Failed" Jun 23 10:08:47.159: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c": Phase="Pending", Reason="", readiness=false. Elapsed: 63.699398ms Jun 23 10:08:49.186: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089993208s Jun 23 10:08:51.211: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114819383s Jun 23 10:08:53.235: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139617358s Jun 23 10:08:55.261: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165655524s Jun 23 10:08:57.287: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.190841927s Jun 23 10:08:59.318: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.222207721s Jun 23 10:09:01.350: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.254101846s Jun 23 10:09:03.399: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.303569301s [1mSTEP[0m: Saw pod success Jun 23 10:09:03.399: INFO: Pod "client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c" satisfied condition "Succeeded or Failed" Jun 23 10:09:03.445: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:03.595: INFO: Waiting for pod client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c to disappear Jun 23 10:09:03.625: INFO: Pod client-containers-fbfa7636-5eff-487e-b5e7-970068e0281c no longer exists [AfterEach] [sig-node] Containers test/e2e/framework/framework.go:188 ... skipping 6 lines ... [90mtest/e2e/common/node/framework.go:23[0m should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":59,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0} [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:08:57.163: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-map-147aca69-f7aa-481e-9eb5-18f80fbd6dab [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:08:57.380: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5d1ad556-eedb-414f-9d44-f82471093f1c" in namespace "projected-7651" to be "Succeeded or Failed" Jun 23 10:08:57.410: INFO: Pod "pod-projected-configmaps-5d1ad556-eedb-414f-9d44-f82471093f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.460732ms Jun 23 10:08:59.488: INFO: Pod "pod-projected-configmaps-5d1ad556-eedb-414f-9d44-f82471093f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107767821s Jun 23 10:09:01.514: INFO: Pod "pod-projected-configmaps-5d1ad556-eedb-414f-9d44-f82471093f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134108622s Jun 23 10:09:03.552: INFO: Pod "pod-projected-configmaps-5d1ad556-eedb-414f-9d44-f82471093f1c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.172492556s [1mSTEP[0m: Saw pod success Jun 23 10:09:03.553: INFO: Pod "pod-projected-configmaps-5d1ad556-eedb-414f-9d44-f82471093f1c" satisfied condition "Succeeded or Failed" Jun 23 10:09:03.609: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod pod-projected-configmaps-5d1ad556-eedb-414f-9d44-f82471093f1c container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:03.758: INFO: Waiting for pod pod-projected-configmaps-5d1ad556-eedb-414f-9d44-f82471093f1c to disappear Jun 23 10:09:03.792: INFO: Pod pod-projected-configmaps-5d1ad556-eedb-414f-9d44-f82471093f1c no longer exists [AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:188 ... skipping 6 lines ... [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":0,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 55 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:04.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-5973" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 33 lines ... [32m• [SLOW TEST:20.760 seconds][0m [sig-api-machinery] Watchers [90mtest/e2e/apimachinery/framework.go:23[0m should observe add, update, and delete watch notifications on configmaps [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 34 lines ... 
[32m• [SLOW TEST:10.489 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 29 lines ... [32m• [SLOW TEST:10.221 seconds][0m [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should be able to convert a non homogeneous list of CRs [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":3,"skipped":77,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 35 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:08.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "endpointslice-8255" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":3,"skipped":56,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Kubelet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 11 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:09.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubelet-test-7082" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name configmap-test-volume-map-700dcecf-d8c2-43a8-a94e-cc770e2a67aa [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:09:05.909: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ba5bc03-03b1-4aac-a515-c415bca3c6cb" in namespace "configmap-8172" to be "Succeeded or Failed" Jun 23 10:09:06.029: INFO: Pod "pod-configmaps-6ba5bc03-03b1-4aac-a515-c415bca3c6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 119.751123ms Jun 23 10:09:08.091: INFO: Pod "pod-configmaps-6ba5bc03-03b1-4aac-a515-c415bca3c6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181679318s Jun 23 10:09:10.142: INFO: Pod "pod-configmaps-6ba5bc03-03b1-4aac-a515-c415bca3c6cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.232482064s [1mSTEP[0m: Saw pod success Jun 23 10:09:10.142: INFO: Pod "pod-configmaps-6ba5bc03-03b1-4aac-a515-c415bca3c6cb" satisfied condition "Succeeded or Failed" Jun 23 10:09:10.188: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-configmaps-6ba5bc03-03b1-4aac-a515-c415bca3c6cb container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:10.499: INFO: Waiting for pod pod-configmaps-6ba5bc03-03b1-4aac-a515-c415bca3c6cb to disappear Jun 23 10:09:10.530: INFO: Pod pod-configmaps-6ba5bc03-03b1-4aac-a515-c415bca3c6cb no longer exists [AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:5.547 seconds][0m [sig-storage] ConfigMap [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected secret test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating projection with secret that has name projected-secret-test-map-cf7a6eca-19b7-4e27-baea-37ad93a27079 [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:08:58.105: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818" in namespace "projected-8310" to be "Succeeded or Failed" Jun 23 10:08:58.147: INFO: Pod "pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818": Phase="Pending", Reason="", readiness=false. Elapsed: 41.121611ms Jun 23 10:09:00.322: INFO: Pod "pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.216449703s Jun 23 10:09:02.408: INFO: Pod "pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302608832s Jun 23 10:09:04.528: INFO: Pod "pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422360544s Jun 23 10:09:06.588: INFO: Pod "pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482400754s Jun 23 10:09:08.631: INFO: Pod "pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818": Phase="Pending", Reason="", readiness=false. Elapsed: 10.526087595s Jun 23 10:09:10.684: INFO: Pod "pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.578237142s [1mSTEP[0m: Saw pod success Jun 23 10:09:10.684: INFO: Pod "pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818" satisfied condition "Succeeded or Failed" Jun 23 10:09:10.721: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818 container projected-secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:10.820: INFO: Waiting for pod pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818 to disappear Jun 23 10:09:10.864: INFO: Pod pod-projected-secrets-8cb256ff-6c26-4676-8f44-4241de123818 no longer exists [AfterEach] [sig-storage] Projected secret test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:13.240 seconds][0m [sig-storage] Projected secret [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":80,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Secrets test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating secret with name secret-test-map-4a50d441-3b76-44d9-a079-daba88bd477c [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:09:04.527: INFO: Waiting up to 5m0s for pod "pod-secrets-2edf3c6c-483f-4cd2-a85d-9e169e3dd9cd" in namespace "secrets-4107" to be "Succeeded or Failed" Jun 23 10:09:04.586: INFO: Pod "pod-secrets-2edf3c6c-483f-4cd2-a85d-9e169e3dd9cd": Phase="Pending", Reason="", readiness=false. Elapsed: 58.162502ms Jun 23 10:09:06.652: INFO: Pod "pod-secrets-2edf3c6c-483f-4cd2-a85d-9e169e3dd9cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124786676s Jun 23 10:09:08.694: INFO: Pod "pod-secrets-2edf3c6c-483f-4cd2-a85d-9e169e3dd9cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166185756s Jun 23 10:09:10.739: INFO: Pod "pod-secrets-2edf3c6c-483f-4cd2-a85d-9e169e3dd9cd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.21145797s [1mSTEP[0m: Saw pod success Jun 23 10:09:10.739: INFO: Pod "pod-secrets-2edf3c6c-483f-4cd2-a85d-9e169e3dd9cd" satisfied condition "Succeeded or Failed" Jun 23 10:09:10.773: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-secrets-2edf3c6c-483f-4cd2-a85d-9e169e3dd9cd container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:10.930: INFO: Waiting for pod pod-secrets-2edf3c6c-483f-4cd2-a85d-9e169e3dd9cd to disappear Jun 23 10:09:10.983: INFO: Pod pod-secrets-2edf3c6c-483f-4cd2-a85d-9e169e3dd9cd no longer exists [AfterEach] [sig-storage] Secrets test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:7.160 seconds][0m [sig-storage] Secrets [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":86,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 25 lines ... [32m• [SLOW TEST:11.456 seconds][0m [sig-network] DNS [90mtest/e2e/network/common/framework.go:23[0m should provide DNS for pods for Hostname [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [Conformance]","total":-1,"completed":2,"skipped":40,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name configmap-test-volume-map-e6109489-312a-4980-b3a1-8e06192484d7 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:09:04.118: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe" in namespace "configmap-1466" to be "Succeeded or Failed" Jun 23 10:09:04.177: INFO: Pod "pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe": Phase="Pending", Reason="", readiness=false. Elapsed: 58.74542ms Jun 23 10:09:06.243: INFO: Pod "pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125046808s Jun 23 10:09:08.302: INFO: Pod "pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe": Phase="Running", Reason="", readiness=true. Elapsed: 4.183618086s Jun 23 10:09:10.434: INFO: Pod "pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.316099036s Jun 23 10:09:12.481: INFO: Pod "pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.362793615s [1mSTEP[0m: Saw pod success Jun 23 10:09:12.481: INFO: Pod "pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe" satisfied condition "Succeeded or Failed" Jun 23 10:09:12.525: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:12.649: INFO: Waiting for pod pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe to disappear Jun 23 10:09:12.709: INFO: Pod pod-configmaps-ee82671a-3c9f-4c54-9428-e805f8fe49fe no longer exists [AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:9.020 seconds][0m [sig-storage] ConfigMap [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":86,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 20 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:13.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-1749" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":3,"skipped":55,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] PodTemplates test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 7 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:14.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "podtemplate-4126" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":6,"skipped":165,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:08:56.906: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename resourcequota [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 23 lines ... [32m• [SLOW TEST:17.631 seconds][0m [sig-api-machinery] ResourceQuota [90mtest/e2e/apimachinery/framework.go:23[0m should verify ResourceQuota with terminating scopes. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 59 lines ... [32m• [SLOW TEST:19.467 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m patching/updating a mutating webhook should work [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":3,"skipped":133,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":5,"skipped":218,"failed":0} [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:04.069: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename init-container [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 13 lines ... [32m• [SLOW TEST:16.586 seconds][0m [sig-node] InitContainer [NodeConformance] [90mtest/e2e/common/node/framework.go:23[0m should invoke init containers on a RestartNever pod [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":6,"skipped":218,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... 
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:09:08.831: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca" in namespace "downward-api-9719" to be "Succeeded or Failed" Jun 23 10:09:08.878: INFO: Pod "downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca": Phase="Pending", Reason="", readiness=false. Elapsed: 47.358394ms Jun 23 10:09:10.946: INFO: Pod "downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115826543s Jun 23 10:09:13.039: INFO: Pod "downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208617709s Jun 23 10:09:15.093: INFO: Pod "downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262196152s Jun 23 10:09:17.312: INFO: Pod "downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.481200634s Jun 23 10:09:19.360: INFO: Pod "downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.529700295s Jun 23 10:09:21.415: INFO: Pod "downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.584134721s [1mSTEP[0m: Saw pod success Jun 23 10:09:21.415: INFO: Pod "downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca" satisfied condition "Succeeded or Failed" Jun 23 10:09:21.480: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:21.630: INFO: Waiting for pod downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca to disappear Jun 23 10:09:21.677: INFO: Pod downwardapi-volume-07e77548-de40-4ee8-8adf-a8b54beb96ca no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
[32m• [SLOW TEST:13.453 seconds][0m [sig-storage] Downward API volume [90mtest/e2e/common/storage/framework.go:23[0m should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":78,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Downward API test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:09.489: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward api env vars Jun 23 10:09:09.990: INFO: Waiting up to 5m0s for pod "downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4" in namespace "downward-api-5144" to be "Succeeded or Failed" Jun 23 10:09:10.047: INFO: Pod "downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4": Phase="Pending", Reason="", readiness=false. Elapsed: 56.574652ms Jun 23 10:09:12.121: INFO: Pod "downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130413861s Jun 23 10:09:14.165: INFO: Pod "downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174189757s Jun 23 10:09:16.293: INFO: Pod "downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.302403085s Jun 23 10:09:18.347: INFO: Pod "downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35665687s Jun 23 10:09:20.390: INFO: Pod "downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.399609309s Jun 23 10:09:22.429: INFO: Pod "downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.438244355s [1mSTEP[0m: Saw pod success Jun 23 10:09:22.429: INFO: Pod "downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4" satisfied condition "Succeeded or Failed" Jun 23 10:09:22.482: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:22.684: INFO: Waiting for pod downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4 to disappear Jun 23 10:09:22.723: INFO: Pod downward-api-a10482f6-ddab-4829-8fb4-5cc81db760d4 no longer exists [AfterEach] [sig-node] Downward API test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
[32m• [SLOW TEST:13.410 seconds][0m [sig-node] Downward API [90mtest/e2e/common/node/framework.go:23[0m should provide pod UID as env vars [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":47,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 30 lines ... [32m• [SLOW TEST:22.057 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m works for CRD without validation schema [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":3,"skipped":46,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 7 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:23.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "configmap-6723" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":6,"skipped":55,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 34 lines ... [90mtest/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90mtest/e2e/apps/statefulset.go:101[0m should list, patch and delete a collection of StatefulSets [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":7,"skipped":169,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:15.174: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename crd-publish-openapi [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 27 lines ... 
[32m• [SLOW TEST:14.129 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m works for CRD preserving unknown fields at the schema root [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":8,"skipped":169,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 41 lines ... Jun 23 10:09:24.272: INFO: Running '/logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6674 explain e2e-test-crd-publish-openapi-393-crds.spec' Jun 23 10:09:24.673: INFO: stderr: "" Jun 23 10:09:24.673: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-393-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jun 23 10:09:24.673: INFO: Running '/logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6674 explain e2e-test-crd-publish-openapi-393-crds.spec.bars' Jun 23 10:09:25.012: INFO: stderr: "" Jun 23 10:09:25.013: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-393-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n" [1mSTEP[0m: kubectl explain works to return error when explain is called on property that doesn't exist Jun 23 10:09:25.013: INFO: Running '/logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6674 explain e2e-test-crd-publish-openapi-393-crds.spec.bars2' Jun 23 10:09:25.363: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:188 Jun 23 10:09:29.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "crd-publish-openapi-6674" for this suite. ... skipping 2 lines ... [32m• [SLOW TEST:32.625 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m works for CRD with validation schema [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":3,"skipped":5,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-instrumentation] Events test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... 
skipping 16 lines ...
  test/e2e/framework/framework.go:188
Jun 23 10:09:30.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7045" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 23 10:09:11.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4" in namespace "downward-api-3485" to be "Succeeded or Failed"
Jun 23 10:09:12.067: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 96.778987ms
Jun 23 10:09:14.126: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156309221s
Jun 23 10:09:16.219: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249097054s
Jun 23 10:09:18.266: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296439442s
Jun 23 10:09:20.314: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.343897582s
Jun 23 10:09:22.355: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.385288244s
Jun 23 10:09:24.381: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.411392643s
Jun 23 10:09:26.407: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.437332273s
Jun 23 10:09:28.433: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.463190338s
Jun 23 10:09:30.458: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.487890088s
Jun 23 10:09:32.483: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.51255232s
STEP: Saw pod success
Jun 23 10:09:32.483: INFO: Pod "downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4" satisfied condition "Succeeded or Failed"
Jun 23 10:09:32.506: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4 container client-container: <nil>
STEP: delete the pod
Jun 23 10:09:32.633: INFO: Waiting for pod downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4 to disappear
Jun 23 10:09:32.656: INFO: Pod downwardapi-volume-3e98678a-9c73-49fa-a835-b5e49ce5aeb4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:188
... skipping 15 lines ...
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-map-4ae4a55c-590c-4de0-ae17-eaf8e9b709f4 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:09:22.559: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06" in namespace "projected-313" to be "Succeeded or Failed" Jun 23 10:09:22.622: INFO: Pod "pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06": Phase="Pending", Reason="", readiness=false. Elapsed: 62.491903ms Jun 23 10:09:24.657: INFO: Pod "pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097399018s Jun 23 10:09:26.686: INFO: Pod "pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126308995s Jun 23 10:09:28.711: INFO: Pod "pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151752493s Jun 23 10:09:30.749: INFO: Pod "pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189172803s Jun 23 10:09:32.774: INFO: Pod "pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.214590127s [1mSTEP[0m: Saw pod success Jun 23 10:09:32.774: INFO: Pod "pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06" satisfied condition "Succeeded or Failed" Jun 23 10:09:32.799: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:32.855: INFO: Waiting for pod pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06 to disappear Jun 23 10:09:32.880: INFO: Pod pod-projected-configmaps-70df10c6-4cf7-4d79-9ce7-db228d952b06 no longer exists [AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:10.873 seconds][0m [sig-storage] Projected configMap [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":107,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] EndpointSlice test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 9 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:33.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "endpointslice-2028" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":9,"skipped":185,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 62 lines ... [90mtest/e2e/kubectl/framework.go:23[0m Kubectl logs [90mtest/e2e/kubectl/kubectl.go:1409[0m should be able to retrieve and filter logs [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":2,"skipped":111,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 91 lines ... [32m• [SLOW TEST:24.032 seconds][0m [sig-api-machinery] Garbage collector [90mtest/e2e/apimachinery/framework.go:23[0m should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":5,"skipped":50,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name configmap-test-volume-684383e5-1e45-4ec6-a512-712813691b38 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:09:15.009: INFO: Waiting up to 5m0s for pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21" in namespace "configmap-7610" to be "Succeeded or Failed" Jun 23 10:09:15.089: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. Elapsed: 79.231405ms Jun 23 10:09:17.310: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300778505s Jun 23 10:09:19.362: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352258538s Jun 23 10:09:21.408: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398472044s Jun 23 10:09:23.446: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436424717s Jun 23 10:09:25.483: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.473104825s Jun 23 10:09:27.519: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. Elapsed: 12.509175169s Jun 23 10:09:29.546: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. Elapsed: 14.535939919s Jun 23 10:09:31.570: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. Elapsed: 16.560503119s Jun 23 10:09:33.594: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Pending", Reason="", readiness=false. Elapsed: 18.584229559s Jun 23 10:09:35.618: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.608598951s [1mSTEP[0m: Saw pod success Jun 23 10:09:35.618: INFO: Pod "pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21" satisfied condition "Succeeded or Failed" Jun 23 10:09:35.645: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:35.713: INFO: Waiting for pod pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21 to disappear Jun 23 10:09:35.740: INFO: Pod pod-configmaps-68369b4f-8c8e-4b4d-b004-638b2c175d21 no longer exists [AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:21.209 seconds][0m [sig-storage] ConfigMap [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:00.963: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename crd-publish-openapi [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 12 lines ... 
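For reference, the [sig-storage] ConfigMap spec recorded as PASSED just above mounts a ConfigMap as a volume and checks the file mode applied via defaultMode. The block below is only a minimal sketch of that kind of pod, written against k8s.io/api/core/v1; the ConfigMap name, pod name, and image are placeholders, not values taken from the e2e test source.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// ConfigMap volume whose projected files get mode 0400 via defaultMode.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-defaultmode-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // assumed to exist
					DefaultMode:          int32Ptr(0o400),
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox", // placeholder image with a shell
				Command:      []string{"sh", "-c", "ls -l /etc/cfg && cat /etc/cfg/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```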
[32m• [SLOW TEST:35.756 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m works for multiple CRDs of same group and version but different kinds [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 12 lines ... [1mSTEP[0m: Destroying namespace "services-4054" for this suite. [AfterEach] [sig-network] Services test/e2e/network/service.go:762 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Kubelet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... [90mtest/e2e/common/node/kubelet.go:43[0m should print the output to logs [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":20,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":110,"failed":0} [BeforeEach] [sig-node] Downward API test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:32.724: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward api env vars Jun 23 10:09:32.917: INFO: Waiting up to 5m0s for pod "downward-api-33fc527f-9557-4ac6-aa4a-139cc953fd56" in namespace "downward-api-5411" to be "Succeeded or Failed" Jun 23 10:09:32.944: INFO: Pod "downward-api-33fc527f-9557-4ac6-aa4a-139cc953fd56": Phase="Pending", Reason="", readiness=false. Elapsed: 27.365633ms Jun 23 10:09:34.970: INFO: Pod "downward-api-33fc527f-9557-4ac6-aa4a-139cc953fd56": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.052567538s Jun 23 10:09:36.995: INFO: Pod "downward-api-33fc527f-9557-4ac6-aa4a-139cc953fd56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077706096s Jun 23 10:09:39.020: INFO: Pod "downward-api-33fc527f-9557-4ac6-aa4a-139cc953fd56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103158083s [1mSTEP[0m: Saw pod success Jun 23 10:09:39.020: INFO: Pod "downward-api-33fc527f-9557-4ac6-aa4a-139cc953fd56" satisfied condition "Succeeded or Failed" Jun 23 10:09:39.045: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod downward-api-33fc527f-9557-4ac6-aa4a-139cc953fd56 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:39.113: INFO: Waiting for pod downward-api-33fc527f-9557-4ac6-aa4a-139cc953fd56 to disappear Jun 23 10:09:39.138: INFO: Pod downward-api-33fc527f-9557-4ac6-aa4a-139cc953fd56 no longer exists [AfterEach] [sig-node] Downward API test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.472 seconds][0m [sig-node] Downward API [90mtest/e2e/common/node/framework.go:23[0m should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":110,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:33.087: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename init-container [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/common/node/init_container.go:164 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: creating the pod Jun 23 10:09:33.355: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/framework.go:188 Jun 23 10:09:39.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "init-container-6046" for this suite. [32m• [SLOW TEST:6.762 seconds][0m [sig-node] InitContainer [NodeConformance] [90mtest/e2e/common/node/framework.go:23[0m should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":6,"skipped":129,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... 
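The [sig-node] Downward API spec recorded as PASSED above injects pod metadata into the container through env-var fieldRefs. A minimal sketch of such a pod, assuming k8s.io/api/core/v1 types; names and image are illustrative and the real e2e fixture differs in detail.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod name, namespace and pod IP surfaced via downward-API fieldRefs.
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{Name: name, ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
		}}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // placeholder image with a shell
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```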
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:09:21.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c" in namespace "projected-4561" to be "Succeeded or Failed" Jun 23 10:09:21.566: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.654539ms Jun 23 10:09:23.592: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063653526s Jun 23 10:09:25.625: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0966178s Jun 23 10:09:27.652: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124442615s Jun 23 10:09:29.704: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176027909s Jun 23 10:09:31.729: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.200900495s Jun 23 10:09:33.754: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.226250361s Jun 23 10:09:35.784: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.25632972s Jun 23 10:09:37.811: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.283463114s Jun 23 10:09:39.838: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.309705826s [1mSTEP[0m: Saw pod success Jun 23 10:09:39.838: INFO: Pod "downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c" satisfied condition "Succeeded or Failed" Jun 23 10:09:39.862: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:39.928: INFO: Waiting for pod downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c to disappear Jun 23 10:09:39.959: INFO: Pod downwardapi-volume-3a2b08e7-e151-4f11-9443-42d70ce78a8c no longer exists [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188 ... skipping 6 lines ... 
[90mtest/e2e/common/storage/framework.go:23[0m should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":254,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Kubelet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 20 lines ... [90mtest/e2e/common/node/kubelet.go:81[0m should have an terminated reason [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":87,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 58 lines ... [32m• [SLOW TEST:6.710 seconds][0m [sig-apps] Deployment [90mtest/e2e/apps/framework.go:23[0m Deployment should have a working scale subresource [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":7,"skipped":188,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":10,"skipped":196,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:35.959: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename var-expansion [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test substitution in container's args Jun 23 10:09:36.162: INFO: Waiting up to 5m0s for pod "var-expansion-0e90bdbd-a862-48c5-8570-065783e0b7bc" in namespace "var-expansion-930" to be "Succeeded or Failed" Jun 23 10:09:36.185: INFO: Pod "var-expansion-0e90bdbd-a862-48c5-8570-065783e0b7bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.256018ms Jun 23 10:09:38.208: INFO: Pod "var-expansion-0e90bdbd-a862-48c5-8570-065783e0b7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046316458s Jun 23 10:09:40.233: INFO: Pod "var-expansion-0e90bdbd-a862-48c5-8570-065783e0b7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071337664s Jun 23 10:09:42.263: INFO: Pod "var-expansion-0e90bdbd-a862-48c5-8570-065783e0b7bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100594299s [1mSTEP[0m: Saw pod success Jun 23 10:09:42.263: INFO: Pod "var-expansion-0e90bdbd-a862-48c5-8570-065783e0b7bc" satisfied condition "Succeeded or Failed" Jun 23 10:09:42.288: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod var-expansion-0e90bdbd-a862-48c5-8570-065783e0b7bc container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:42.379: INFO: Waiting for pod var-expansion-0e90bdbd-a862-48c5-8570-065783e0b7bc to disappear Jun 23 10:09:42.407: INFO: Pod var-expansion-0e90bdbd-a862-48c5-8570-065783e0b7bc no longer exists [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.522 seconds][0m [sig-node] Variable Expansion [90mtest/e2e/common/node/framework.go:23[0m should allow substituting values in a container's args [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":55,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 17 lines ... [90mtest/e2e/apimachinery/framework.go:23[0m should not be blocked by dependency circle [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":6,"skipped":21,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods Extended test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 12 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:43.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pods-8885" for this suite. 
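The [sig-node] Variable Expansion spec above relies on the kubelet expanding $(VAR) references in a container's args from that container's env. A sketch of the pattern under the same assumptions as the earlier examples (placeholder names and image, not the test's own source):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// $(MESSAGE) in Args is expanded by the kubelet before the shell ever runs.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // placeholder image with a shell
				Command: []string{"sh", "-c"},
				Args:    []string{"echo $(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from args expansion"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```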
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 199 lines ... [90mtest/e2e/kubectl/framework.go:23[0m Guestbook application [90mtest/e2e/kubectl/kubectl.go:340[0m should create and stop a working application [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Networking test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 88 lines ... [32m• [SLOW TEST:10.406 seconds][0m [sig-storage] ConfigMap [90mtest/e2e/common/storage/framework.go:23[0m binary data should be reflected in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":52,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap configmap-3606/configmap-test-88725a82-4e85-41a8-b01e-6695f15b562b [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:09:34.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8" in namespace "configmap-3606" to be "Succeeded or Failed" Jun 23 10:09:34.302: INFO: Pod "pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8": Phase="Pending", Reason="", readiness=false. Elapsed: 25.021156ms Jun 23 10:09:36.327: INFO: Pod "pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049689636s Jun 23 10:09:38.353: INFO: Pod "pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075531134s Jun 23 10:09:40.385: INFO: Pod "pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108405892s Jun 23 10:09:42.414: INFO: Pod "pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.137081944s Jun 23 10:09:44.442: INFO: Pod "pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.165277353s Jun 23 10:09:46.470: INFO: Pod "pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.193327148s [1mSTEP[0m: Saw pod success Jun 23 10:09:46.471: INFO: Pod "pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8" satisfied condition "Succeeded or Failed" Jun 23 10:09:46.507: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8 container env-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:46.594: INFO: Waiting for pod pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8 to disappear Jun 23 10:09:46.624: INFO: Pod pod-configmaps-cdaa2d39-1181-4b9e-ac53-9b94d68aefe8 no longer exists [AfterEach] [sig-node] ConfigMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:12.627 seconds][0m [sig-node] ConfigMap [90mtest/e2e/common/node/framework.go:23[0m should be consumable via the environment [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":124,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Container Runtime test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:40.246: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename container-runtime [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: create the container [1mSTEP[0m: wait for the container to reach Failed [1mSTEP[0m: get the container status [1mSTEP[0m: the container should be terminated [1mSTEP[0m: the termination message should be set Jun 23 10:09:46.771: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- [1mSTEP[0m: delete the container [AfterEach] [sig-node] Container Runtime ... skipping 9 lines ... 
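The [sig-node] ConfigMap spec above surfaces a single ConfigMap key as a container environment variable via configMapKeyRef. A minimal sketch, again assuming k8s.io/api/core/v1 types with a placeholder ConfigMap name and key:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One ConfigMap key exposed to the container as CONFIG_DATA.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // placeholder image with a shell
				Command: []string{"sh", "-c", "env | grep CONFIG_DATA"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA",
					ValueFrom: &corev1.EnvVarSource{ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // assumed to exist
						Key:                  "data-1",                                       // assumed key
					}},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```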
[90mtest/e2e/common/node/runtime.go:43[0m on terminated container [90mtest/e2e/common/node/runtime.go:136[0m should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":105,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0} [BeforeEach] [sig-apps] Deployment test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:08:59.940: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename deployment [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 57 lines ... [32m• [SLOW TEST:47.851 seconds][0m [sig-apps] Deployment [90mtest/e2e/apps/framework.go:23[0m deployment should support rollover [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Downward API test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:40.072: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward api env vars Jun 23 10:09:40.277: INFO: Waiting up to 5m0s for pod "downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795" in namespace "downward-api-8549" to be "Succeeded or Failed" Jun 23 10:09:40.316: INFO: Pod "downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795": Phase="Pending", Reason="", readiness=false. Elapsed: 38.856403ms Jun 23 10:09:42.347: INFO: Pod "downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070214314s Jun 23 10:09:44.376: INFO: Pod "downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098989318s Jun 23 10:09:46.403: INFO: Pod "downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125625778s Jun 23 10:09:48.429: INFO: Pod "downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.151754624s [1mSTEP[0m: Saw pod success Jun 23 10:09:48.429: INFO: Pod "downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795" satisfied condition "Succeeded or Failed" Jun 23 10:09:48.454: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:48.514: INFO: Waiting for pod downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795 to disappear Jun 23 10:09:48.542: INFO: Pod downward-api-4f9f29bf-adbb-4a1b-ab1d-dd568cdda795 no longer exists [AfterEach] [sig-node] Downward API test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:8.526 seconds][0m [sig-node] Downward API [90mtest/e2e/common/node/framework.go:23[0m should provide host IP as an env var [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":260,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 30 lines ... [32m• [SLOW TEST:11.540 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should mutate custom resource [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":4,"skipped":85,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:40.751: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0666 on node default medium Jun 23 10:09:41.009: INFO: Waiting up to 5m0s for pod "pod-77600345-e685-4436-ae49-2613690a6d97" in namespace "emptydir-1900" to be "Succeeded or Failed" Jun 23 10:09:41.041: INFO: Pod "pod-77600345-e685-4436-ae49-2613690a6d97": Phase="Pending", Reason="", readiness=false. Elapsed: 32.424443ms Jun 23 10:09:43.079: INFO: Pod "pod-77600345-e685-4436-ae49-2613690a6d97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069747032s Jun 23 10:09:45.107: INFO: Pod "pod-77600345-e685-4436-ae49-2613690a6d97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097597018s Jun 23 10:09:47.139: INFO: Pod "pod-77600345-e685-4436-ae49-2613690a6d97": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.130036811s Jun 23 10:09:49.165: INFO: Pod "pod-77600345-e685-4436-ae49-2613690a6d97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156512024s [1mSTEP[0m: Saw pod success Jun 23 10:09:49.166: INFO: Pod "pod-77600345-e685-4436-ae49-2613690a6d97" satisfied condition "Succeeded or Failed" Jun 23 10:09:49.189: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-77600345-e685-4436-ae49-2613690a6d97 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:49.244: INFO: Waiting for pod pod-77600345-e685-4436-ae49-2613690a6d97 to disappear Jun 23 10:09:49.267: INFO: Pod pod-77600345-e685-4436-ae49-2613690a6d97 no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:8.577 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":202,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 40 lines ... [32m• [SLOW TEST:28.289 seconds][0m [sig-network] Services [90mtest/e2e/network/common/framework.go:23[0m should be able to change the type from ExternalName to NodePort [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":7,"skipped":61,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 7 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:52.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-2344" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":79,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 32 lines ... 
• [SLOW TEST:8.563 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":7,"skipped":79,"failed":0}
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:6.618 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":3,"skipped":61,"failed":0}
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 21 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating pod pod-subpath-test-secret-v7cl
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 10:09:28.406: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-v7cl" in namespace "subpath-8236" to be "Succeeded or Failed"
Jun 23 10:09:28.434: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Pending", Reason="", readiness=false. Elapsed: 27.309264ms
Jun 23 10:09:30.458: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051488573s
Jun 23 10:09:32.483: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076091178s
Jun 23 10:09:34.508: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Running", Reason="", readiness=true. Elapsed: 6.101272539s
Jun 23 10:09:36.531: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Running", Reason="", readiness=true. Elapsed: 8.124561014s
Jun 23 10:09:38.558: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Running", Reason="", readiness=true. Elapsed: 10.151955927s
... skipping 3 lines ...
Jun 23 10:09:46.764: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Running", Reason="", readiness=true. Elapsed: 18.357390633s
Jun 23 10:09:48.807: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Running", Reason="", readiness=true. Elapsed: 20.400532006s
Jun 23 10:09:50.833: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Running", Reason="", readiness=true. Elapsed: 22.426132144s
Jun 23 10:09:52.863: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Running", Reason="", readiness=true. Elapsed: 24.456089089s
Jun 23 10:09:54.887: INFO: Pod "pod-subpath-test-secret-v7cl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.480301842s
STEP: Saw pod success
Jun 23 10:09:54.887: INFO: Pod "pod-subpath-test-secret-v7cl" satisfied condition "Succeeded or Failed"
Jun 23 10:09:54.912: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-subpath-test-secret-v7cl container test-container-subpath-secret-v7cl: <nil>
STEP: delete the pod
Jun 23 10:09:54.970: INFO: Waiting for pod pod-subpath-test-secret-v7cl to disappear
Jun 23 10:09:54.993: INFO: Pod pod-subpath-test-secret-v7cl no longer exists
STEP: Deleting pod pod-subpath-test-secret-v7cl
Jun 23 10:09:54.993: INFO: Deleting pod "pod-subpath-test-secret-v7cl" in namespace "subpath-8236"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with secret pod [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":67,"failed":0}
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]","total":-1,"completed":3,"skipped":46,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 74 lines ...
• [SLOW TEST:32.025 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":64,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:6.443 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":12,"skipped":229,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 167 lines ...
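The Subpath spec above exercises a secret-backed volume mounted with a subPath. Roughly, such a pod looks like the sketch below; the secret name and key are assumed placeholders, and the e2e test's actual fixture differs.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The same secret volume is mounted twice: whole volume plus a subPath onto one file.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-subpath-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "creds",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}}, // assumed to exist
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox", // placeholder image with a shell
				Command: []string{"sh", "-c", "cat /etc/creds-file && ls /etc/creds"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "creds", MountPath: "/etc/creds"},
					{Name: "creds", MountPath: "/etc/creds-file", SubPath: "username"}, // assumed key in the secret
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```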
[90mtest/e2e/kubectl/framework.go:23[0m Update Demo [90mtest/e2e/kubectl/kubectl.go:295[0m should scale a replication controller [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Secrets test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 7 lines ... test/e2e/framework/framework.go:188 Jun 23 10:09:56.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-6401" for this suite. [32m•[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":13,"skipped":267,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... [32m• [SLOW TEST:8.474 seconds][0m [sig-node] Pods [90mtest/e2e/common/node/framework.go:23[0m should support retrieving logs from the container over websockets [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":283,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 6 lines ... [It] should contain environment variables for services [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 Jun 23 10:09:43.178: INFO: The status of Pod server-envvars-f7f33e35-1a60-464e-b73b-fc55eaeb3774 is Pending, waiting for it to be Running (with Ready = true) Jun 23 10:09:45.204: INFO: The status of Pod server-envvars-f7f33e35-1a60-464e-b73b-fc55eaeb3774 is Pending, waiting for it to be Running (with Ready = true) Jun 23 10:09:47.215: INFO: The status of Pod server-envvars-f7f33e35-1a60-464e-b73b-fc55eaeb3774 is Pending, waiting for it to be Running (with Ready = true) Jun 23 10:09:49.201: INFO: The status of Pod server-envvars-f7f33e35-1a60-464e-b73b-fc55eaeb3774 is Running (Ready = true) Jun 23 10:09:49.279: INFO: Waiting up to 5m0s for pod "client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe" in namespace "pods-6923" to be "Succeeded or Failed" Jun 23 10:09:49.304: INFO: Pod "client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.093217ms Jun 23 10:09:51.329: INFO: Pod "client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049969121s Jun 23 10:09:53.353: INFO: Pod "client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074160461s Jun 23 10:09:55.397: INFO: Pod "client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe": Phase="Running", Reason="", readiness=true. Elapsed: 6.117962808s Jun 23 10:09:57.423: INFO: Pod "client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.143912753s [1mSTEP[0m: Saw pod success Jun 23 10:09:57.423: INFO: Pod "client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe" satisfied condition "Succeeded or Failed" Jun 23 10:09:57.460: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe container env3cont: <nil> [1mSTEP[0m: delete the pod Jun 23 10:09:57.552: INFO: Waiting for pod client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe to disappear Jun 23 10:09:57.575: INFO: Pod client-envvars-9e510e1e-f385-4944-b980-0f5f257108fe no longer exists [AfterEach] [sig-node] Pods test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:14.867 seconds][0m [sig-node] Pods [90mtest/e2e/common/node/framework.go:23[0m should contain environment variables for services [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":107,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... [90mtest/e2e/common/node/framework.go:23[0m should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":36,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":33,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:08:55.603: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename crd-watch [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 24 lines ... 
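The Probing container spec above uses an exec liveness probe (`cat /tmp/health`) that begins failing once the file is removed, so the kubelet restarts the container. A minimal sketch; the field names follow the Probe/ProbeHandler layout of k8s.io/api v0.24-era clients, and the image and command are placeholders rather than the e2e fixture.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container deletes /tmp/health after 10s, so the exec probe starts failing
	// and the kubelet restarts the container (default restartPolicy is Always).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox", // placeholder image with a shell
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler:        corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```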
[90mtest/e2e/apimachinery/framework.go:23[0m CustomResourceDefinition Watch [90mtest/e2e/apimachinery/crd_watch.go:44[0m watch on custom resource definition objects [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":2,"skipped":33,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 14 lines ... [32m• [SLOW TEST:60.692 seconds][0m [sig-node] Probing container [90mtest/e2e/common/node/framework.go:23[0m with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":135,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... [90mtest/e2e/apps/framework.go:23[0m should list and delete a collection of ReplicaSets [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":8,"skipped":130,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:09:56.889: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e553c40-0b2c-4d67-99fe-ec96b17c9e55" in namespace "projected-1400" to be "Succeeded or Failed" Jun 23 10:09:56.920: INFO: Pod "downwardapi-volume-7e553c40-0b2c-4d67-99fe-ec96b17c9e55": Phase="Pending", Reason="", readiness=false. Elapsed: 31.068993ms Jun 23 10:09:58.952: INFO: Pod "downwardapi-volume-7e553c40-0b2c-4d67-99fe-ec96b17c9e55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063354797s Jun 23 10:10:01.008: INFO: Pod "downwardapi-volume-7e553c40-0b2c-4d67-99fe-ec96b17c9e55": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.118669837s [1mSTEP[0m: Saw pod success Jun 23 10:10:01.008: INFO: Pod "downwardapi-volume-7e553c40-0b2c-4d67-99fe-ec96b17c9e55" satisfied condition "Succeeded or Failed" Jun 23 10:10:01.125: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod downwardapi-volume-7e553c40-0b2c-4d67-99fe-ec96b17c9e55 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:01.301: INFO: Waiting for pod downwardapi-volume-7e553c40-0b2c-4d67-99fe-ec96b17c9e55 to disappear Jun 23 10:10:01.360: INFO: Pod downwardapi-volume-7e553c40-0b2c-4d67-99fe-ec96b17c9e55 no longer exists [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188 Jun 23 10:10:01.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-1400" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":73,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 42 lines ... [90mtest/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90mtest/e2e/apps/statefulset.go:101[0m should validate Statefulset Status endpoints [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":8,"skipped":200,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Aggregator test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 32 lines ... [90mtest/e2e/apimachinery/framework.go:23[0m Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... 
[32m• [SLOW TEST:6.765 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m pod should support shared volumes between containers [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":14,"skipped":310,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] version v1 test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 44 lines ... [90mtest/e2e/network/common/framework.go:23[0m version v1 [90mtest/e2e/network/proxy.go:74[0m A set of valid responses are returned for both pod and service Proxy [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]","total":-1,"completed":10,"skipped":299,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-6d9f566e-d420-4630-a663-57c3f50c3bc2 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:09:59.269: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c413f72d-1827-45a7-9b09-7242db2f0deb" in namespace "projected-1617" to be "Succeeded or Failed" Jun 23 10:09:59.297: INFO: Pod "pod-projected-configmaps-c413f72d-1827-45a7-9b09-7242db2f0deb": Phase="Pending", Reason="", readiness=false. Elapsed: 27.233622ms Jun 23 10:10:01.350: INFO: Pod "pod-projected-configmaps-c413f72d-1827-45a7-9b09-7242db2f0deb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080068926s Jun 23 10:10:03.385: INFO: Pod "pod-projected-configmaps-c413f72d-1827-45a7-9b09-7242db2f0deb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114869506s Jun 23 10:10:05.408: INFO: Pod "pod-projected-configmaps-c413f72d-1827-45a7-9b09-7242db2f0deb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.138718254s [1mSTEP[0m: Saw pod success Jun 23 10:10:05.409: INFO: Pod "pod-projected-configmaps-c413f72d-1827-45a7-9b09-7242db2f0deb" satisfied condition "Succeeded or Failed" Jun 23 10:10:05.432: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod pod-projected-configmaps-c413f72d-1827-45a7-9b09-7242db2f0deb container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:05.488: INFO: Waiting for pod pod-projected-configmaps-c413f72d-1827-45a7-9b09-7242db2f0deb to disappear Jun 23 10:10:05.510: INFO: Pod pod-projected-configmaps-c413f72d-1827-45a7-9b09-7242db2f0deb no longer exists [AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
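The EmptyDir spec recorded as PASSED at the top of this block shares one emptyDir volume between two containers in the same pod. A sketch of that pattern; container names, paths, and image are illustrative only.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One container writes into the emptyDir volume, the other reads the same file.
	shared := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volume"}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-emptydir-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					Name:         "writer",
					Image:        "busybox", // placeholder image with a shell
					Command:      []string{"sh", "-c", "echo hello > /usr/share/volume/data.txt; sleep 5"},
					VolumeMounts: []corev1.VolumeMount{shared},
				},
				{
					Name:         "reader",
					Image:        "busybox",
					Command:      []string{"sh", "-c", "sleep 2; cat /usr/share/volume/data.txt"},
					VolumeMounts: []corev1.VolumeMount{shared},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```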
[32m• [SLOW TEST:6.570 seconds][0m [sig-storage] Projected configMap [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":51,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 37 lines ... [90mtest/e2e/kubectl/framework.go:23[0m Kubectl replace [90mtest/e2e/kubectl/kubectl.go:1571[0m should update a single-container pod's image [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":5,"skipped":69,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context test/e2e/common/node/security_context.go:48 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 Jun 23 10:10:01.848: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-ba9cb82b-656c-4208-83ce-97d0366a95bc" in namespace "security-context-test-237" to be "Succeeded or Failed" Jun 23 10:10:01.891: INFO: Pod "busybox-privileged-false-ba9cb82b-656c-4208-83ce-97d0366a95bc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.899725ms Jun 23 10:10:03.935: INFO: Pod "busybox-privileged-false-ba9cb82b-656c-4208-83ce-97d0366a95bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087587801s Jun 23 10:10:05.962: INFO: Pod "busybox-privileged-false-ba9cb82b-656c-4208-83ce-97d0366a95bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114285131s Jun 23 10:10:07.988: INFO: Pod "busybox-privileged-false-ba9cb82b-656c-4208-83ce-97d0366a95bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14004735s Jun 23 10:10:07.988: INFO: Pod "busybox-privileged-false-ba9cb82b-656c-4208-83ce-97d0366a95bc" satisfied condition "Succeeded or Failed" Jun 23 10:10:08.015: INFO: Got logs for pod "busybox-privileged-false-ba9cb82b-656c-4208-83ce-97d0366a95bc": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context test/e2e/framework/framework.go:188 Jun 23 10:10:08.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-237" for this suite. ... skipping 3 lines ... 
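Editor's note: the "[sig-node] Security Context ... should run the container as unprivileged when false" test above creates a busybox pod with privileged=false and then expects network-admin operations inside it to fail (the log shows "ip: RTNETLINK answers: Operation not permitted"). A rough sketch of a pod of that shape follows, using the k8s.io/api Go types; the name, image tag, and the exact ip command are illustrative assumptions, not the test's real values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// An unprivileged busybox pod: with Privileged=false, adding a network
	// interface from inside the container should be denied by the kernel.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox",
				Image:           "busybox:1.35", // assumed tag
				Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"}, // illustrative command
				SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}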
[90mtest/e2e/common/node/framework.go:23[0m When creating a pod with privileged [90mtest/e2e/common/node/security_context.go:234[0m should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":74,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Secrets test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating secret with name secret-test-f70f6b4d-3d1b-45a0-a6e3-59306388e493 [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:10:02.503: INFO: Waiting up to 5m0s for pod "pod-secrets-fdc87d71-46b6-4ded-a0e3-92f278f66fca" in namespace "secrets-9866" to be "Succeeded or Failed" Jun 23 10:10:02.575: INFO: Pod "pod-secrets-fdc87d71-46b6-4ded-a0e3-92f278f66fca": Phase="Pending", Reason="", readiness=false. Elapsed: 71.792188ms Jun 23 10:10:04.600: INFO: Pod "pod-secrets-fdc87d71-46b6-4ded-a0e3-92f278f66fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09685572s Jun 23 10:10:06.628: INFO: Pod "pod-secrets-fdc87d71-46b6-4ded-a0e3-92f278f66fca": Phase="Running", Reason="", readiness=true. Elapsed: 4.125114036s Jun 23 10:10:08.666: INFO: Pod "pod-secrets-fdc87d71-46b6-4ded-a0e3-92f278f66fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.162676907s [1mSTEP[0m: Saw pod success Jun 23 10:10:08.666: INFO: Pod "pod-secrets-fdc87d71-46b6-4ded-a0e3-92f278f66fca" satisfied condition "Succeeded or Failed" Jun 23 10:10:08.694: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-secrets-fdc87d71-46b6-4ded-a0e3-92f278f66fca container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:08.761: INFO: Waiting for pod pod-secrets-fdc87d71-46b6-4ded-a0e3-92f278f66fca to disappear Jun 23 10:10:08.786: INFO: Pod pod-secrets-fdc87d71-46b6-4ded-a0e3-92f278f66fca no longer exists [AfterEach] [sig-storage] Secrets test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.765 seconds][0m [sig-storage] Secrets [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":243,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 28 lines ... 
[90mtest/e2e/kubectl/framework.go:23[0m Kubectl run pod [90mtest/e2e/kubectl/kubectl.go:1537[0m should create a pod from an image when restart is Never [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":9,"skipped":140,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 46 lines ... [90mtest/e2e/network/common/framework.go:23[0m should be able to create a functioning NodePort service [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":6,"skipped":111,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 32 lines ... [32m• [SLOW TEST:14.656 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should be able to deny attaching pod [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":6,"skipped":132,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 21 lines ... [32m• [SLOW TEST:11.808 seconds][0m [sig-api-machinery] ResourceQuota [90mtest/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a service. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":15,"skipped":319,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 25 lines ... 
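Editor's note: the "[sig-network] Services should be able to create a functioning NodePort service" result recorded above exercises a Service of type NodePort and then dials the allocated node port. As a hedged sketch of what such a Service object looks like (not the test's actual manifest), assuming a hypothetical selector app=echo and backend port 8080:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A NodePort Service selecting pods labeled app=echo; leaving NodePort
	// unset lets the API server allocate one from the configured node-port range
	// (the cluster above was created with nodePortAccess=0.0.0.0/0 so the port is reachable).
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-demo"}, // hypothetical name
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "echo"}, // assumed label
			Ports: []corev1.ServicePort{{
				Name:       "http",
				Port:       80,
				TargetPort: intstr.FromInt(8080), // assumed backend port
				Protocol:   corev1.ProtocolTCP,
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}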
[1mSTEP[0m: retrieving the pod [1mSTEP[0m: looking for the results for each expected name from probers Jun 23 10:09:52.932: INFO: File wheezy_udp@dns-test-service-3.dns-1505.svc.cluster.local from pod dns-1505/dns-test-8b44e890-f178-444c-a41e-dd18a99a774d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 23 10:09:52.989: INFO: File jessie_udp@dns-test-service-3.dns-1505.svc.cluster.local from pod dns-1505/dns-test-8b44e890-f178-444c-a41e-dd18a99a774d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 23 10:09:52.989: INFO: Lookups using dns-1505/dns-test-8b44e890-f178-444c-a41e-dd18a99a774d failed for: [wheezy_udp@dns-test-service-3.dns-1505.svc.cluster.local jessie_udp@dns-test-service-3.dns-1505.svc.cluster.local] Jun 23 10:09:58.018: INFO: File wheezy_udp@dns-test-service-3.dns-1505.svc.cluster.local from pod dns-1505/dns-test-8b44e890-f178-444c-a41e-dd18a99a774d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 23 10:09:58.048: INFO: File jessie_udp@dns-test-service-3.dns-1505.svc.cluster.local from pod dns-1505/dns-test-8b44e890-f178-444c-a41e-dd18a99a774d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 23 10:09:58.048: INFO: Lookups using dns-1505/dns-test-8b44e890-f178-444c-a41e-dd18a99a774d failed for: [wheezy_udp@dns-test-service-3.dns-1505.svc.cluster.local jessie_udp@dns-test-service-3.dns-1505.svc.cluster.local] Jun 23 10:10:03.178: INFO: DNS probes using dns-test-8b44e890-f178-444c-a41e-dd18a99a774d succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: changing the service to type=ClusterIP [1mSTEP[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1505.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1505.svc.cluster.local; sleep 1; done ... skipping 17 lines ... [32m• [SLOW TEST:61.851 seconds][0m [sig-network] DNS [90mtest/e2e/network/common/framework.go:23[0m should provide DNS for ExternalName services [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":4,"skipped":63,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 13 lines ... test/e2e/framework/framework.go:188 Jun 23 10:10:16.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-4914" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":5,"skipped":68,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... 
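Editor's note: the DNS ExternalName test above probes dns-test-service-3.dns-1505.svc.cluster.local with dig loops from "wheezy" and "jessie" prober pods, first expecting a CNAME to foo.example.com, then bar.example.com, and finally an A record once the Service is switched to type ClusterIP. A minimal sketch of the ExternalName Service involved, using the k8s.io/api Go types (the namespace and external name are taken from the log; everything else is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An ExternalName Service: cluster DNS answers lookups for
	// dns-test-service-3.<ns>.svc.cluster.local with a CNAME to ExternalName
	// until the Service's type is changed to ClusterIP.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3", Namespace: "dns-1505"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}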
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:10:06.732: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee" in namespace "projected-1465" to be "Succeeded or Failed" Jun 23 10:10:06.759: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 26.442666ms Jun 23 10:10:08.784: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051279425s Jun 23 10:10:10.827: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094448516s Jun 23 10:10:12.854: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121310545s Jun 23 10:10:14.880: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147684693s Jun 23 10:10:16.907: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 10.174346152s Jun 23 10:10:18.931: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 12.198889718s Jun 23 10:10:20.962: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 14.229550418s Jun 23 10:10:22.988: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.25556021s [1mSTEP[0m: Saw pod success Jun 23 10:10:22.988: INFO: Pod "downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee" satisfied condition "Succeeded or Failed" Jun 23 10:10:23.011: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:23.081: INFO: Waiting for pod downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee to disappear Jun 23 10:10:23.105: INFO: Pod downwardapi-volume-46e4f5aa-8621-4816-9752-8b9c2a97d7ee no longer exists [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
[32m• [SLOW TEST:16.659 seconds][0m [sig-storage] Projected downwardAPI [90mtest/e2e/common/storage/framework.go:23[0m should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":76,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Secrets test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 12 lines ... test/e2e/framework/framework.go:188 Jun 23 10:10:24.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-2652" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":7,"skipped":172,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... [32m• [SLOW TEST:28.510 seconds][0m [sig-api-machinery] ResourceQuota [90mtest/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a configMap. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":5,"skipped":76,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:10:24.148: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename svcaccounts [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 22 lines ... test/e2e/framework/framework.go:188 Jun 23 10:10:24.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "svcaccounts-7377" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":6,"skipped":76,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 17 lines ... 
[32m• [SLOW TEST:46.280 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m works for multiple CRDs of same group but different versions [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":6,"skipped":117,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 48 lines ... [32m• [SLOW TEST:36.202 seconds][0m [sig-network] EndpointSlice [90mtest/e2e/network/common/framework.go:23[0m should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":9,"skipped":115,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 157 lines ... [90mtest/e2e/common/node/framework.go:23[0m when create a pod with lifecycle hook [90mtest/e2e/common/node/lifecycle_hook.go:46[0m should execute poststart exec hook properly [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":153,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 8 lines ... test/e2e/framework/framework.go:188 Jun 23 10:10:42.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "custom-resource-definition-5087" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":4,"skipped":163,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... test/e2e/framework/framework.go:188 Jun 23 10:10:43.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "watch-5549" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":5,"skipped":178,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 52 lines ... [32m• [SLOW TEST:45.948 seconds][0m [sig-network] Services [90mtest/e2e/network/common/framework.go:23[0m should serve multiport endpoints from pods [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":3,"skipped":43,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 141 lines ... [32m• [SLOW TEST:44.087 seconds][0m [sig-api-machinery] Garbage collector [90mtest/e2e/apimachinery/framework.go:23[0m should orphan pods created by rc if delete options say so [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":10,"skipped":286,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:10:12.508: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 23 10:10:12.701: INFO: Waiting up to 5m0s for pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b" in namespace "security-context-7607" to be "Succeeded or Failed" Jun 23 10:10:12.724: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.733646ms Jun 23 10:10:14.756: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055275975s Jun 23 10:10:16.788: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087113396s Jun 23 10:10:18.813: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111842672s Jun 23 10:10:20.837: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135534922s Jun 23 10:10:22.864: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.16302195s ... skipping 10 lines ... Jun 23 10:10:45.147: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.44598186s Jun 23 10:10:47.185: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.483340439s Jun 23 10:10:49.208: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.507311651s Jun 23 10:10:51.232: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.530530424s Jun 23 10:10:53.255: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.553731301s [1mSTEP[0m: Saw pod success Jun 23 10:10:53.255: INFO: Pod "security-context-1fc26469-db89-491a-8436-fdc605c4794b" satisfied condition "Succeeded or Failed" Jun 23 10:10:53.278: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod security-context-1fc26469-db89-491a-8436-fdc605c4794b container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:53.403: INFO: Waiting for pod security-context-1fc26469-db89-491a-8436-fdc605c4794b to disappear Jun 23 10:10:53.426: INFO: Pod security-context-1fc26469-db89-491a-8436-fdc605c4794b no longer exists [AfterEach] [sig-node] Security Context test/e2e/framework/framework.go:188 ... skipping 15 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating projection with secret that has name projected-secret-test-70de47f9-958c-4eb4-bae7-08401f09da01 [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:10:29.049: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb" in namespace "projected-2209" to be "Succeeded or Failed" Jun 23 10:10:29.076: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.255097ms Jun 23 10:10:31.101: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051481811s Jun 23 10:10:33.126: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076747202s Jun 23 10:10:35.151: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.101578153s Jun 23 10:10:37.176: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126795367s Jun 23 10:10:39.201: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.151194087s ... skipping 2 lines ... Jun 23 10:10:45.278: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.22838569s Jun 23 10:10:47.304: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.25420488s Jun 23 10:10:49.328: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.278911596s Jun 23 10:10:51.354: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.304290426s Jun 23 10:10:53.378: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.328327031s [1mSTEP[0m: Saw pod success Jun 23 10:10:53.378: INFO: Pod "pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb" satisfied condition "Succeeded or Failed" Jun 23 10:10:53.405: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb container projected-secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:53.489: INFO: Waiting for pod pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb to disappear Jun 23 10:10:53.514: INFO: Pod pod-projected-secrets-65d4e4a8-1732-45b7-8246-7ad001e129eb no longer exists [AfterEach] [sig-storage] Projected secret test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:24.743 seconds][0m [sig-storage] Projected secret [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":122,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] EndpointSliceMirroring test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 12 lines ... test/e2e/framework/framework.go:188 Jun 23 10:10:53.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "endpointslicemirroring-8348" for this suite. 
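Editor's note: the "[sig-storage] Projected secret ... as non-root with defaultMode and fsGroup set" test that passes above mounts a secret through a projected volume while the pod runs with a non-root UID and fsGroup. The sketch below shows a pod of that shape with the k8s.io/api Go types; the UID/GID values, file mode, image tag, and names are assumptions for illustration, not the test's real parameters.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	// A non-root pod consuming a secret via a projected volume: fsGroup lets
	// the non-root container read the files, and DefaultMode controls their permissions.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // assumed UID
				FSGroup:   int64Ptr(1000), // assumed GID
			},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox:1.35", // assumed tag
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret && cat /etc/projected-secret/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret",
					MountPath: "/etc/projected-secret",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0440), // assumed mode
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-demo"}, // hypothetical secret
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}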
[32m•[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":11,"skipped":292,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-instrumentation] Events API test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 14 lines ... test/e2e/framework/framework.go:188 Jun 23 10:10:54.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-8982" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":12,"skipped":299,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name configmap-test-volume-35fcb69e-9d71-4e50-9871-7b70752ff107 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:10:09.843: INFO: Waiting up to 5m0s for pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87" in namespace "configmap-944" to be "Succeeded or Failed" Jun 23 10:10:09.884: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 40.285999ms Jun 23 10:10:11.910: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066825783s Jun 23 10:10:13.933: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089398894s Jun 23 10:10:15.958: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114602551s Jun 23 10:10:17.985: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141945882s Jun 23 10:10:20.013: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 10.169271871s ... skipping 12 lines ... Jun 23 10:10:46.330: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 36.486480225s Jun 23 10:10:48.353: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 38.5095751s Jun 23 10:10:50.376: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 40.532358999s Jun 23 10:10:52.403: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 42.559196406s Jun 23 10:10:54.426: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 44.582652478s [1mSTEP[0m: Saw pod success Jun 23 10:10:54.426: INFO: Pod "pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87" satisfied condition "Succeeded or Failed" Jun 23 10:10:54.448: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87 container configmap-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:54.512: INFO: Waiting for pod pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87 to disappear Jun 23 10:10:54.536: INFO: Pod pod-configmaps-4fbc0805-03ab-4fde-a365-7e519fea8b87 no longer exists [AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:45.106 seconds][0m [sig-storage] ConfigMap [90mtest/e2e/common/storage/framework.go:23[0m should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":178,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 23 lines ... [32m• [SLOW TEST:50.485 seconds][0m [sig-network] DNS [90mtest/e2e/network/common/framework.go:23[0m should provide /etc/hosts entries for the cluster [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]","total":-1,"completed":4,"skipped":76,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Containers test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 12 lines ... 
[32m• [SLOW TEST:40.372 seconds][0m [sig-node] Containers [90mtest/e2e/common/node/framework.go:23[0m should use the image defaults if command and args are blank [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":81,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":109,"failed":0} [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:10:28.038: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name configmap-test-volume-map-f5b5e932-cbdd-4f76-b3ac-a8e0a24c1d08 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:10:28.276: INFO: Waiting up to 5m0s for pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f" in namespace "configmap-2958" to be "Succeeded or Failed" Jun 23 10:10:28.299: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.192476ms Jun 23 10:10:30.327: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051310574s Jun 23 10:10:32.353: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076540428s Jun 23 10:10:34.377: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100994534s Jun 23 10:10:36.402: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125733008s Jun 23 10:10:38.426: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.150140527s ... skipping 5 lines ... Jun 23 10:10:50.578: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.301695435s Jun 23 10:10:52.602: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.32566621s Jun 23 10:10:54.626: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.350051392s Jun 23 10:10:56.652: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.375511158s Jun 23 10:10:58.684: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.407639632s [1mSTEP[0m: Saw pod success Jun 23 10:10:58.684: INFO: Pod "pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f" satisfied condition "Succeeded or Failed" Jun 23 10:10:58.710: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:58.790: INFO: Waiting for pod pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f to disappear Jun 23 10:10:58.821: INFO: Pod pod-configmaps-090075da-8448-4f52-a29d-f6f2a4027b9f no longer exists [AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:30.859 seconds][0m [sig-storage] ConfigMap [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":109,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:10:54.783: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename var-expansion [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test env composition Jun 23 10:10:55.028: INFO: Waiting up to 5m0s for pod "var-expansion-28902180-0be9-4bc3-b3f5-c2e12d3d07d1" in namespace "var-expansion-3013" to be "Succeeded or Failed" Jun 23 10:10:55.051: INFO: Pod "var-expansion-28902180-0be9-4bc3-b3f5-c2e12d3d07d1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.419802ms Jun 23 10:10:57.075: INFO: Pod "var-expansion-28902180-0be9-4bc3-b3f5-c2e12d3d07d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046417883s Jun 23 10:10:59.098: INFO: Pod "var-expansion-28902180-0be9-4bc3-b3f5-c2e12d3d07d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069822608s [1mSTEP[0m: Saw pod success Jun 23 10:10:59.098: INFO: Pod "var-expansion-28902180-0be9-4bc3-b3f5-c2e12d3d07d1" satisfied condition "Succeeded or Failed" Jun 23 10:10:59.123: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod var-expansion-28902180-0be9-4bc3-b3f5-c2e12d3d07d1 container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:59.186: INFO: Waiting for pod var-expansion-28902180-0be9-4bc3-b3f5-c2e12d3d07d1 to disappear Jun 23 10:10:59.209: INFO: Pod var-expansion-28902180-0be9-4bc3-b3f5-c2e12d3d07d1 no longer exists [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 Jun 23 10:10:59.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "var-expansion-3013" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":221,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":7,"skipped":177,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:10:38.828: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0644 on tmpfs Jun 23 10:10:39.022: INFO: Waiting up to 5m0s for pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae" in namespace "emptydir-6707" to be "Succeeded or Failed" Jun 23 10:10:39.047: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. Elapsed: 24.341655ms Jun 23 10:10:41.073: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050744657s Jun 23 10:10:43.098: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075993232s Jun 23 10:10:45.124: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101962283s Jun 23 10:10:47.149: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126909291s Jun 23 10:10:49.176: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153167808s Jun 23 10:10:51.201: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. Elapsed: 12.17841666s Jun 23 10:10:53.226: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. Elapsed: 14.203234245s Jun 23 10:10:55.252: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. Elapsed: 16.229175135s Jun 23 10:10:57.279: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.257024141s Jun 23 10:10:59.308: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.285754176s [1mSTEP[0m: Saw pod success Jun 23 10:10:59.308: INFO: Pod "pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae" satisfied condition "Succeeded or Failed" Jun 23 10:10:59.333: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:59.397: INFO: Waiting for pod pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae to disappear Jun 23 10:10:59.422: INFO: Pod pod-c7cc9784-84fe-45eb-a2de-96cdea7c51ae no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:20.654 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":177,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Job test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 28 lines ... [32m• [SLOW TEST:35.533 seconds][0m [sig-apps] Job [90mtest/e2e/apps/framework.go:23[0m should adopt matching orphans and release non-matching pods [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":8,"skipped":177,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":136,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:10:53.499: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename svcaccounts [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should mount projected service account token [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test service account token: Jun 23 10:10:53.688: INFO: Waiting up to 5m0s for pod "test-pod-f5bbfc6c-7bbc-45d9-9179-317e6c1fd875" in namespace "svcaccounts-4531" to be "Succeeded or Failed" Jun 23 10:10:53.711: INFO: Pod "test-pod-f5bbfc6c-7bbc-45d9-9179-317e6c1fd875": Phase="Pending", Reason="", readiness=false. Elapsed: 22.582252ms Jun 23 10:10:55.736: INFO: Pod "test-pod-f5bbfc6c-7bbc-45d9-9179-317e6c1fd875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047229231s Jun 23 10:10:57.761: INFO: Pod "test-pod-f5bbfc6c-7bbc-45d9-9179-317e6c1fd875": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072136179s Jun 23 10:10:59.787: INFO: Pod "test-pod-f5bbfc6c-7bbc-45d9-9179-317e6c1fd875": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.097682685s [1mSTEP[0m: Saw pod success Jun 23 10:10:59.787: INFO: Pod "test-pod-f5bbfc6c-7bbc-45d9-9179-317e6c1fd875" satisfied condition "Succeeded or Failed" Jun 23 10:10:59.812: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod test-pod-f5bbfc6c-7bbc-45d9-9179-317e6c1fd875 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:10:59.878: INFO: Waiting for pod test-pod-f5bbfc6c-7bbc-45d9-9179-317e6c1fd875 to disappear Jun 23 10:10:59.911: INFO: Pod test-pod-f5bbfc6c-7bbc-45d9-9179-317e6c1fd875 no longer exists [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.470 seconds][0m [sig-auth] ServiceAccounts [90mtest/e2e/auth/framework.go:23[0m should mount projected service account token [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":8,"skipped":136,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 13 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:00.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "resourcequota-6756" for this suite. [32m•[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":9,"skipped":185,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 21 lines ... [90mtest/e2e/apps/framework.go:23[0m should observe PodDisruptionBudget status updated [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":11,"skipped":164,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 8 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:00.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "custom-resource-definition-5821" for this suite. 
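Editor's note: the "[sig-auth] ServiceAccounts should mount projected service account token" test above verifies that a pod can receive its service account token through a projected ServiceAccountToken volume source rather than the legacy secret-based mount. A hedged sketch of such a pod follows (k8s.io/api Go types; mount path, token path, expiration, image, and names are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// A pod whose service account token is projected as a short-lived file;
	// the container simply checks that the token file exists and is non-empty.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod-projected-sa-token"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "registry.k8s.io/e2e-test-images/agnhost:2.39", // assumed tag
				Command: []string{"sh", "-c", "test -s /var/run/secrets/tokens/sa-token && echo token mounted"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "sa-token",
					MountPath: "/var/run/secrets/tokens",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "sa-token",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
								Path:              "sa-token",
								ExpirationSeconds: int64Ptr(3600), // assumed expiry
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}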
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":10,"skipped":188,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] CronJob test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 16 lines ... [32m• [SLOW TEST:120.424 seconds][0m [sig-apps] CronJob [90mtest/e2e/apps/framework.go:23[0m should schedule multiple jobs concurrently [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 10 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:04.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "watch-4792" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":12,"skipped":213,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 30 lines ... [32m• [SLOW TEST:10.171 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should mutate pod and apply defaults after mutation [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":13,"skipped":315,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 60 lines ... 
[32m• [SLOW TEST:56.964 seconds][0m [sig-apps] DisruptionController [90mtest/e2e/apps/framework.go:23[0m should block an eviction until the PDB is updated to allow it [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":5,"skipped":79,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:10:58.994: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0644 on node default medium Jun 23 10:10:59.202: INFO: Waiting up to 5m0s for pod "pod-d7b75d5e-5f10-4de9-8f11-9988b747b63d" in namespace "emptydir-2312" to be "Succeeded or Failed" Jun 23 10:10:59.225: INFO: Pod "pod-d7b75d5e-5f10-4de9-8f11-9988b747b63d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.53497ms Jun 23 10:11:01.253: INFO: Pod "pod-d7b75d5e-5f10-4de9-8f11-9988b747b63d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05102945s Jun 23 10:11:03.285: INFO: Pod "pod-d7b75d5e-5f10-4de9-8f11-9988b747b63d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083486726s Jun 23 10:11:05.312: INFO: Pod "pod-d7b75d5e-5f10-4de9-8f11-9988b747b63d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109699606s [1mSTEP[0m: Saw pod success Jun 23 10:11:05.312: INFO: Pod "pod-d7b75d5e-5f10-4de9-8f11-9988b747b63d" satisfied condition "Succeeded or Failed" Jun 23 10:11:05.337: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-d7b75d5e-5f10-4de9-8f11-9988b747b63d container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:05.404: INFO: Waiting for pod pod-d7b75d5e-5f10-4de9-8f11-9988b747b63d to disappear Jun 23 10:11:05.431: INFO: Pod pod-d7b75d5e-5f10-4de9-8f11-9988b747b63d no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.495 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":127,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Networking test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 74 lines ... 
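The emptyDir case above ("root,0644,default") comes down to mounting a scratch volume backed by the node's default medium and inspecting what a container writes into it. A minimal stand-alone equivalent, with illustrative names rather than the suite's generated ones, would be:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                # illustrative; the suite uses generated pod names
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/out && ls -ln /data"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}                     # default medium is node-local disk; medium: Memory would give tmpfs
EOF

Once the pod succeeds, kubectl logs emptydir-demo shows the kind of listing (ownership and mode of the mounted path) that the conformance case asserts on.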
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
Jun 23 10:10:59.744: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-761d8657-5fc9-4c8e-8f2a-b5577cb6b7ce" in namespace "security-context-test-4596" to be "Succeeded or Failed"
Jun 23 10:10:59.781: INFO: Pod "busybox-readonly-false-761d8657-5fc9-4c8e-8f2a-b5577cb6b7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 37.662799ms
Jun 23 10:11:01.806: INFO: Pod "busybox-readonly-false-761d8657-5fc9-4c8e-8f2a-b5577cb6b7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062673707s
Jun 23 10:11:03.832: INFO: Pod "busybox-readonly-false-761d8657-5fc9-4c8e-8f2a-b5577cb6b7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087902643s
Jun 23 10:11:05.859: INFO: Pod "busybox-readonly-false-761d8657-5fc9-4c8e-8f2a-b5577cb6b7ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115441771s
Jun 23 10:11:05.859: INFO: Pod "busybox-readonly-false-761d8657-5fc9-4c8e-8f2a-b5577cb6b7ce" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
Jun 23 10:11:05.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4596" for this suite.
... skipping 2 lines ...
  test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  test/e2e/common/node/security_context.go:173
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":182,"failed":0}
SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0}
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 10:11:05.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 34 lines ...
STEP: Destroying namespace "services-8701" for this suite.
[AfterEach] [sig-network] Services test/e2e/network/service.go:762 [32m•[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":8,"skipped":170,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 20 lines ... [32m• [SLOW TEST:82.006 seconds][0m [sig-storage] ConfigMap [90mtest/e2e/common/storage/framework.go:23[0m updates should be reflected in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":127,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... 
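The ConfigMap "updates should be reflected in volume" case that passed above relies on the kubelet periodically re-syncing projected ConfigMap volumes. A hand-run version of the same idea, using illustrative names rather than the suite's objects:

kubectl create configmap demo-settings --from-literal=mode=initial    # illustrative name
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-watch-demo
spec:
  containers:
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "while true; do cat /etc/settings/mode; sleep 5; done"]
    volumeMounts:
    - name: settings
      mountPath: /etc/settings
  volumes:
  - name: settings
    configMap:
      name: demo-settings
EOF
# update the data in place; the file under /etc/settings follows after the kubelet's sync period
kubectl create configmap demo-settings --from-literal=mode=updated --dry-run=client -o yaml | kubectl replace -f -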
Jun 23 10:08:54.855: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 23 10:08:56.855: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jun 23 10:08:56.878: INFO: Running '/logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=services-502 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 23 10:08:57.640: INFO: rc: 7 Jun 23 10:08:57.671: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 23 10:08:57.706: INFO: Pod kube-proxy-mode-detector no longer exists Jun 23 10:08:57.706: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=services-502 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode: Command stdout: stderr: + curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode command terminated with exit code 7 error: exit status 7 [1mSTEP[0m: creating service affinity-nodeport-timeout in namespace services-502 [1mSTEP[0m: creating replication controller affinity-nodeport-timeout in namespace services-502 I0623 10:08:57.776616 39685 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-502, replica count: 3 I0623 10:09:00.829482 39685 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0623 10:09:03.830252 39685 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady ... skipping 56 lines ... [90mtest/e2e/network/common/framework.go:23[0m should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":26,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 82 lines ... 
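The session-affinity case above first probes kube-proxy's /proxyMode endpoint on port 10249 (which fails here, hence the hedged "test failure may be expected" note) and then exercises a NodePort Service with ClientIP affinity and a short timeout. The Service shape it depends on is roughly the sketch below; the name, selector, ports, and the 10-second timeout are illustrative, not the values the suite used:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo                # illustrative name
spec:
  type: NodePort
  selector:
    app: affinity-demo               # must match the labels on the backing pods
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10             # after 10s of inactivity a client may be re-balanced to another backend
EOF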
[32m• [SLOW TEST:26.456 seconds][0m [sig-node] Pods [90mtest/e2e/common/node/framework.go:23[0m should run through the lifecycle of Pods and PodStatus [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":6,"skipped":180,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 76 lines ... [90mtest/e2e/network/common/framework.go:23[0m should serve a basic endpoint from pods [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43 [It] should provide container's memory request [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:11:04.521: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b366ad5-0b01-46d7-b6c5-02a54624626d" in namespace "downward-api-7601" to be "Succeeded or Failed" Jun 23 10:11:04.547: INFO: Pod "downwardapi-volume-8b366ad5-0b01-46d7-b6c5-02a54624626d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.040452ms Jun 23 10:11:06.573: INFO: Pod "downwardapi-volume-8b366ad5-0b01-46d7-b6c5-02a54624626d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052096541s Jun 23 10:11:08.598: INFO: Pod "downwardapi-volume-8b366ad5-0b01-46d7-b6c5-02a54624626d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077123856s Jun 23 10:11:10.654: INFO: Pod "downwardapi-volume-8b366ad5-0b01-46d7-b6c5-02a54624626d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133145541s [1mSTEP[0m: Saw pod success Jun 23 10:11:10.654: INFO: Pod "downwardapi-volume-8b366ad5-0b01-46d7-b6c5-02a54624626d" satisfied condition "Succeeded or Failed" Jun 23 10:11:10.702: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod downwardapi-volume-8b366ad5-0b01-46d7-b6c5-02a54624626d container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:10.887: INFO: Waiting for pod downwardapi-volume-8b366ad5-0b01-46d7-b6c5-02a54624626d to disappear Jun 23 10:11:10.955: INFO: Pod downwardapi-volume-8b366ad5-0b01-46d7-b6c5-02a54624626d no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188 ... 
skipping 4 lines ... [32m• [SLOW TEST:6.732 seconds][0m [sig-storage] Downward API volume [90mtest/e2e/common/storage/framework.go:23[0m should provide container's memory request [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":226,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] CronJob test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 25 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:11.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "cronjob-9101" for this suite. [32m•[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":6,"skipped":68,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:01.743: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0666 on node default medium Jun 23 10:11:01.953: INFO: Waiting up to 5m0s for pod "pod-116b73ac-199e-40b8-8ffb-90054a8c2645" in namespace "emptydir-1101" to be "Succeeded or Failed" Jun 23 10:11:01.986: INFO: Pod "pod-116b73ac-199e-40b8-8ffb-90054a8c2645": Phase="Pending", Reason="", readiness=false. Elapsed: 33.68867ms Jun 23 10:11:04.012: INFO: Pod "pod-116b73ac-199e-40b8-8ffb-90054a8c2645": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059771133s Jun 23 10:11:06.039: INFO: Pod "pod-116b73ac-199e-40b8-8ffb-90054a8c2645": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086312039s Jun 23 10:11:08.074: INFO: Pod "pod-116b73ac-199e-40b8-8ffb-90054a8c2645": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121694488s Jun 23 10:11:10.124: INFO: Pod "pod-116b73ac-199e-40b8-8ffb-90054a8c2645": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171544991s Jun 23 10:11:12.168: INFO: Pod "pod-116b73ac-199e-40b8-8ffb-90054a8c2645": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.215687021s [1mSTEP[0m: Saw pod success Jun 23 10:11:12.168: INFO: Pod "pod-116b73ac-199e-40b8-8ffb-90054a8c2645" satisfied condition "Succeeded or Failed" Jun 23 10:11:12.204: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-116b73ac-199e-40b8-8ffb-90054a8c2645 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:12.286: INFO: Waiting for pod pod-116b73ac-199e-40b8-8ffb-90054a8c2645 to disappear Jun 23 10:11:12.323: INFO: Pod pod-116b73ac-199e-40b8-8ffb-90054a8c2645 no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:10.652 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Deployment test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... [32m• [SLOW TEST:46.784 seconds][0m [sig-apps] Deployment [90mtest/e2e/apps/framework.go:23[0m deployment should delete old replica sets [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":7,"skipped":186,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Secrets test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating secret with name secret-test-4b406a57-cbe9-4add-931b-622567084dc0 [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:11:04.702: INFO: Waiting up to 5m0s for pod "pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b" in namespace "secrets-9945" to be "Succeeded or Failed" Jun 23 10:11:04.726: INFO: Pod "pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.479105ms Jun 23 10:11:06.753: INFO: Pod "pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050957646s Jun 23 10:11:08.782: INFO: Pod "pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080533195s Jun 23 10:11:10.824: INFO: Pod "pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122233593s Jun 23 10:11:12.854: INFO: Pod "pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.152346505s [1mSTEP[0m: Saw pod success Jun 23 10:11:12.854: INFO: Pod "pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b" satisfied condition "Succeeded or Failed" Jun 23 10:11:12.882: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b container secret-env-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:12.952: INFO: Waiting for pod pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b to disappear Jun 23 10:11:12.978: INFO: Pod pod-secrets-4b23b33c-27c0-42c1-9226-517169be1b1b no longer exists [AfterEach] [sig-node] Secrets test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:8.570 seconds][0m [sig-node] Secrets [90mtest/e2e/common/node/framework.go:23[0m should be consumable from pods in env vars [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":318,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-instrumentation] Events API test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:13.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-642" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":8,"skipped":188,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... [32m• [SLOW TEST:16.237 seconds][0m [sig-apps] ReplicaSet [90mtest/e2e/apps/framework.go:23[0m Replace and Patch tests [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":7,"skipped":172,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Job test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:10:56.244: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename job [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a job [1mSTEP[0m: Ensuring job reaches completions [AfterEach] [sig-apps] Job test/e2e/framework/framework.go:188 Jun 23 10:11:14.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "job-8910" for this suite. 
[32m• [SLOW TEST:18.272 seconds][0m [sig-apps] Job [90mtest/e2e/apps/framework.go:23[0m should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":5,"skipped":81,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 41 lines ... [90mtest/e2e/apps/framework.go:23[0m should validate Replicaset Status endpoints [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":10,"skipped":203,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 54 lines ... [90mtest/e2e/common/node/runtime.go:43[0m when starting a container that exits [90mtest/e2e/common/node/runtime.go:44[0m should run with the expected status [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":321,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Containers test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:00.255: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename containers [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test override command Jun 23 10:11:00.469: INFO: Waiting up to 5m0s for pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1" in namespace "containers-8783" to be "Succeeded or Failed" Jun 23 10:11:00.499: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 29.685961ms Jun 23 10:11:02.551: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082201738s Jun 23 10:11:04.576: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.107062027s Jun 23 10:11:06.603: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134192807s Jun 23 10:11:08.628: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159255413s Jun 23 10:11:10.658: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188711257s Jun 23 10:11:12.685: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.21639574s Jun 23 10:11:14.715: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.246518877s Jun 23 10:11:16.750: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.281109202s [1mSTEP[0m: Saw pod success Jun 23 10:11:16.750: INFO: Pod "client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1" satisfied condition "Succeeded or Failed" Jun 23 10:11:16.774: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:16.860: INFO: Waiting for pod client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1 to disappear Jun 23 10:11:16.894: INFO: Pod client-containers-7c498329-d329-4a2a-8263-e8ae8742b2b1 no longer exists [AfterEach] [sig-node] Containers test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:16.704 seconds][0m [sig-node] Containers [90mtest/e2e/common/node/framework.go:23[0m should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":180,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 36 lines ... [32m• [SLOW TEST:12.191 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should include webhook resources in discovery documents [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":6,"skipped":100,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] IngressClass API test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 23 lines ... 
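The Containers case above ("override the image's default command") reduces to setting command on the container, which replaces the image's ENTRYPOINT; args replaces its CMD. A minimal illustration with placeholder names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.36
    command: ["echo"]                           # replaces the image's ENTRYPOINT
    args: ["entrypoint", "was", "overridden"]   # replaces the image's CMD
EOF

kubectl logs command-override-demo should then print the overridden output once the pod succeeds.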
test/e2e/framework/framework.go:188
Jun 23 10:11:17.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-3785" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":10,"skipped":187,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 10:11:09.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 23 10:11:09.323: INFO: Waiting up to 5m0s for pod "security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56" in namespace "security-context-2817" to be "Succeeded or Failed"
Jun 23 10:11:09.356: INFO: Pod "security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56": Phase="Pending", Reason="", readiness=false. Elapsed: 33.327133ms
Jun 23 10:11:11.384: INFO: Pod "security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061749565s
Jun 23 10:11:13.409: INFO: Pod "security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086592538s
Jun 23 10:11:15.442: INFO: Pod "security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11937496s
Jun 23 10:11:17.473: INFO: Pod "security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150520174s
Jun 23 10:11:19.499: INFO: Pod "security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176078053s
STEP: Saw pod success
Jun 23 10:11:19.499: INFO: Pod "security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56" satisfied condition "Succeeded or Failed"
Jun 23 10:11:19.524: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56 container test-container: <nil>
STEP: delete the pod
Jun 23 10:11:19.594: INFO: Waiting for pod security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56 to disappear
Jun 23 10:11:19.622: INFO: Pod security-context-28e2a1c6-a5be-4901-bc36-da15fda1ec56 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
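The RunAsUser/RunAsGroup case above sets the identity at the pod level, so every container runs with that uid/gid unless it overrides them. A stand-alone sketch with illustrative names and values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: runas-demo
spec:
  restartPolicy: Never
  securityContext:                 # pod-level; containers inherit unless they set their own
    runAsUser: 1001
    runAsGroup: 3001
  containers:
  - name: id-check
    image: busybox:1.36
    command: ["id"]                # expected to report uid=1001 and gid=3001
EOF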
[32m• [SLOW TEST:10.595 seconds][0m [sig-node] Security Context [90mtest/e2e/node/framework.go:23[0m should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":203,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43 [It] should provide podname only [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:11:11.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01" in namespace "projected-7052" to be "Succeeded or Failed" Jun 23 10:11:11.623: INFO: Pod "downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01": Phase="Pending", Reason="", readiness=false. Elapsed: 29.232694ms Jun 23 10:11:13.648: INFO: Pod "downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054365979s Jun 23 10:11:15.674: INFO: Pod "downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080364606s Jun 23 10:11:17.708: INFO: Pod "downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114578294s Jun 23 10:11:19.737: INFO: Pod "downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.144203837s [1mSTEP[0m: Saw pod success Jun 23 10:11:19.738: INFO: Pod "downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01" satisfied condition "Succeeded or Failed" Jun 23 10:11:19.768: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:19.860: INFO: Waiting for pod downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01 to disappear Jun 23 10:11:19.886: INFO: Pod downwardapi-volume-0a76d760-56a4-4ce1-8f65-01d460bc6b01 no longer exists [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:8.632 seconds][0m [sig-storage] Projected downwardAPI [90mtest/e2e/common/storage/framework.go:23[0m should provide podname only [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":276,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] version v1 test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 44 lines ... 
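The projected downwardAPI case above exposes only the pod's own name as a file inside the container. The volume stanza it exercises looks roughly like this (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.36
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # the only field projected in this sketch
EOF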
[90mtest/e2e/network/common/framework.go:23[0m version v1 [90mtest/e2e/network/proxy.go:74[0m A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":9,"skipped":209,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected secret test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 23 lines ... [32m• [SLOW TEST:85.682 seconds][0m [sig-storage] Projected secret [90mtest/e2e/common/storage/framework.go:23[0m optional updates should be reflected in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":67,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 16 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:20.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "disruption-3067" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":10,"skipped":236,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 13 lines ... 
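For the DisruptionController cases in this stretch of the run, the object under test is simply a PodDisruptionBudget. A minimal one, with an illustrative name and selector:

kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo
spec:
  minAvailable: 2                  # voluntary evictions are refused while they would leave fewer than 2 ready pods
  selector:
    matchLabels:
      app: pdb-demo                # must match the pods the budget protects
EOF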
Jun 23 10:11:13.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 23 10:11:15.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 23 10:11:17.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 10, 11, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} [1mSTEP[0m: Deploying the webhook service [1mSTEP[0m: Verifying the service has paired with the endpoint Jun 23 10:11:20.725: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API [1mSTEP[0m: create a namespace for the webhook [1mSTEP[0m: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:188 Jun 23 10:11:20.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "webhook-1469" for this suite. ... skipping 2 lines ... 
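The fail-closed webhook case above registers a webhook whose backend can never answer and sets failurePolicy: Fail, so matching requests are rejected outright. A sketch of such a registration, scoped by a namespace label so it cannot block the rest of the cluster; all names, the label, and the service reference are illustrative, not the suite's:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-demo
webhooks:
- name: demo.fail-closed.example.com
  failurePolicy: Fail                    # if the webhook cannot be reached, reject the request
  sideEffects: None
  admissionReviewVersions: ["v1"]
  namespaceSelector:
    matchLabels:
      fail-closed-demo: "true"           # only namespaces carrying this label are affected
  clientConfig:
    service:
      namespace: demo-ns                 # illustrative: points at a service that never answers
      name: no-such-webhook
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
EOF

Creating a ConfigMap in a namespace carrying that label should then fail with a webhook-unreachable error, which is the behaviour the conformance case asserts.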
test/e2e/apimachinery/webhook.go:104 [32m• [SLOW TEST:12.244 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should unconditionally reject operations on fail closed webhook [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":2,"skipped":51,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":6,"skipped":102,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:15.033: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0666 on tmpfs Jun 23 10:11:15.301: INFO: Waiting up to 5m0s for pod "pod-c4d45237-ba0b-42cc-8510-97b2bc60c042" in namespace "emptydir-4027" to be "Succeeded or Failed" Jun 23 10:11:15.337: INFO: Pod "pod-c4d45237-ba0b-42cc-8510-97b2bc60c042": Phase="Pending", Reason="", readiness=false. Elapsed: 35.6324ms Jun 23 10:11:17.373: INFO: Pod "pod-c4d45237-ba0b-42cc-8510-97b2bc60c042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071920593s Jun 23 10:11:19.396: INFO: Pod "pod-c4d45237-ba0b-42cc-8510-97b2bc60c042": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095309382s Jun 23 10:11:21.419: INFO: Pod "pod-c4d45237-ba0b-42cc-8510-97b2bc60c042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118300771s [1mSTEP[0m: Saw pod success Jun 23 10:11:21.420: INFO: Pod "pod-c4d45237-ba0b-42cc-8510-97b2bc60c042" satisfied condition "Succeeded or Failed" Jun 23 10:11:21.442: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-c4d45237-ba0b-42cc-8510-97b2bc60c042 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:21.502: INFO: Waiting for pod pod-c4d45237-ba0b-42cc-8510-97b2bc60c042 to disappear Jun 23 10:11:21.524: INFO: Pod pod-c4d45237-ba0b-42cc-8510-97b2bc60c042 no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.544 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":102,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... 
skipping 74 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap configmap-9595/configmap-test-4c2ff5bd-45b9-4452-ad75-fe109755b4b8 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:11:11.561: INFO: Waiting up to 5m0s for pod "pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f" in namespace "configmap-9595" to be "Succeeded or Failed" Jun 23 10:11:11.599: INFO: Pod "pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.08531ms Jun 23 10:11:13.628: INFO: Pod "pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066952828s Jun 23 10:11:15.654: INFO: Pod "pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092289004s Jun 23 10:11:17.684: INFO: Pod "pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122979393s Jun 23 10:11:19.713: INFO: Pod "pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151246109s Jun 23 10:11:21.739: INFO: Pod "pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.177305889s Jun 23 10:11:23.765: INFO: Pod "pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.203785966s [1mSTEP[0m: Saw pod success Jun 23 10:11:23.765: INFO: Pod "pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f" satisfied condition "Succeeded or Failed" Jun 23 10:11:23.793: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f container env-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:23.862: INFO: Waiting for pod pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f to disappear Jun 23 10:11:23.886: INFO: Pod pod-configmaps-5bab6be3-0c96-4bc3-9600-0f25b372280f no longer exists [AfterEach] [sig-node] ConfigMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:12.658 seconds][0m [sig-node] ConfigMap [90mtest/e2e/common/node/framework.go:23[0m should be consumable via environment variable [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":95,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 18 lines ... 
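The ConfigMap "consumable via environment variable" case above maps a single key into the container environment. An equivalent hand-written pod, with illustrative names:

kubectl create configmap env-demo --from-literal=GREETING=hello      # illustrative
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: printer
    image: busybox:1.36
    command: ["sh", "-c", "echo $GREETING"]
    env:
    - name: GREETING
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: GREETING
EOF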
Jun 23 10:11:11.904: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Setting timeout (1s) shorter than webhook latency (5s) [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [1mSTEP[0m: Request fails when timeout (1s) is shorter than slow webhook latency (5s) [1mSTEP[0m: Having no error when timeout is shorter than webhook latency and failure policy is ignore [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [1mSTEP[0m: Having no error when timeout is longer than webhook latency [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [1mSTEP[0m: Having no error when timeout is empty (defaulted to 10s in v1) [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:188 Jun 23 10:11:24.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "webhook-6248" for this suite. [1mSTEP[0m: Destroying namespace "webhook-6248-markers" for this suite. ... skipping 4 lines ... [32m• [SLOW TEST:25.590 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should honor timeout [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":12,"skipped":233,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... [32m• [SLOW TEST:11.451 seconds][0m [sig-api-machinery] ResourceQuota [90mtest/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a replication controller. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":8,"skipped":175,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Containers test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:21.340: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename containers [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test override all Jun 23 10:11:21.535: INFO: Waiting up to 5m0s for pod "client-containers-f1f9032a-0178-4d94-82cd-69835b7d2242" in namespace "containers-8720" to be "Succeeded or Failed" Jun 23 10:11:21.559: INFO: Pod "client-containers-f1f9032a-0178-4d94-82cd-69835b7d2242": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.875193ms Jun 23 10:11:23.596: INFO: Pod "client-containers-f1f9032a-0178-4d94-82cd-69835b7d2242": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060355654s Jun 23 10:11:25.621: INFO: Pod "client-containers-f1f9032a-0178-4d94-82cd-69835b7d2242": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085176765s [1mSTEP[0m: Saw pod success Jun 23 10:11:25.621: INFO: Pod "client-containers-f1f9032a-0178-4d94-82cd-69835b7d2242" satisfied condition "Succeeded or Failed" Jun 23 10:11:25.646: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod client-containers-f1f9032a-0178-4d94-82cd-69835b7d2242 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:25.705: INFO: Waiting for pod client-containers-f1f9032a-0178-4d94-82cd-69835b7d2242 to disappear Jun 23 10:11:25.729: INFO: Pod client-containers-f1f9032a-0178-4d94-82cd-69835b7d2242 no longer exists [AfterEach] [sig-node] Containers test/e2e/framework/framework.go:188 Jun 23 10:11:25.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "containers-8720" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":71,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Downward API test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:17.602: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward api env vars Jun 23 10:11:17.823: INFO: Waiting up to 5m0s for pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e" in namespace "downward-api-7942" to be "Succeeded or Failed" Jun 23 10:11:17.850: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.275211ms Jun 23 10:11:19.883: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058970571s Jun 23 10:11:21.909: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085334364s Jun 23 10:11:23.934: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110016483s Jun 23 10:11:25.966: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Succeeded", Reason="", readiness=false. 
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 23 10:11:17.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jun 23 10:11:17.823: INFO: Waiting up to 5m0s for pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e" in namespace "downward-api-7942" to be "Succeeded or Failed"
Jun 23 10:11:17.850: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.275211ms
Jun 23 10:11:19.883: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058970571s
Jun 23 10:11:21.909: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085334364s
Jun 23 10:11:23.934: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110016483s
Jun 23 10:11:25.966: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142335151s
STEP: Saw pod success
Jun 23 10:11:25.967: INFO: Pod "downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e" satisfied condition "Succeeded or Failed"
Jun 23 10:11:25.994: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e container dapi-container: <nil>
STEP: delete the pod
Jun 23 10:11:26.087: INFO: Waiting for pod downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e to disappear
Jun 23 10:11:26.137: INFO: Pod downward-api-55cd5196-6f3d-44aa-8a75-8166d80bb59e no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.640 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
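In the Downward API test above the container asks for its own limits.cpu and limits.memory through resourceFieldRef env vars while declaring no limits, so the kubelet substitutes the node's allocatable values. A sketch of such a pod spec; the image and variable names are assumptions:

package podexamples

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIDefaultsPod exposes the container's cpu/memory limits as env
// vars. Because no limits are set on the container, the values default to the
// node's allocatable capacity.
func downwardAPIDefaultsPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downward-api-", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2", // image is an assumption
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
}
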
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating projection with secret that has name projected-secret-test-map-02ee2dc5-546f-45af-b9d2-2953c8ff4beb
STEP: Creating a pod to test consume secrets
Jun 23 10:11:10.160: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1" in namespace "projected-7369" to be "Succeeded or Failed"
Jun 23 10:11:10.224: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 63.255685ms
Jun 23 10:11:12.258: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097878755s
Jun 23 10:11:14.286: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125068246s
Jun 23 10:11:16.311: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15074977s
Jun 23 10:11:18.339: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178310972s
Jun 23 10:11:20.365: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204888862s
Jun 23 10:11:22.392: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.231893747s
Jun 23 10:11:24.448: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.287386208s
Jun 23 10:11:26.478: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.317347862s
STEP: Saw pod success
Jun 23 10:11:26.478: INFO: Pod "pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1" satisfied condition "Succeeded or Failed"
Jun 23 10:11:26.505: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 10:11:26.576: INFO: Waiting for pod pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1 to disappear
Jun 23 10:11:26.638: INFO: Pod pod-projected-secrets-0fa47bf0-c0df-45da-b2b0-c117ae989bd1 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:16.857 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":230,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
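The Projected secret test above wires a secret into the pod through a projected volume, remapping the key to a new path and giving the file an explicit mode before reading it back. A sketch of that volume wiring; the key name, paths, modes and image are assumptions:

package podexamples

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod mounts one key of the named secret via a projected
// volume, with a per-item path mapping and file mode.
func projectedSecretPod(namespace, secretName string) *corev1.Pod {
	itemMode := int32(0400)    // mode for the remapped file (assumption)
	defaultMode := int32(0644) // default for anything else in the volume
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-secrets-", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",          // assumed key in the secret
									Path: "new-path-data-1", // remapped path (assumption)
									Mode: &itemMode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39", // image is an assumption
				Args:  []string{"mounttest", "--file_mode=/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}
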
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:27.792 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":11,"skipped":195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:8.796 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":92,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating secret with name secret-test-515cb37f-00a6-4747-8d1b-d84127f33f9a
STEP: Creating a pod to test consume secrets
Jun 23 10:11:21.470: INFO: Waiting up to 5m0s for pod "pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7" in namespace "secrets-34" to be "Succeeded or Failed"
Jun 23 10:11:21.497: INFO: Pod "pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.318887ms
Jun 23 10:11:23.521: INFO: Pod "pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050608975s
Jun 23 10:11:25.562: INFO: Pod "pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091656482s
Jun 23 10:11:27.587: INFO: Pod "pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11627249s
Jun 23 10:11:29.627: INFO: Pod "pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156610785s
STEP: Saw pod success
Jun 23 10:11:29.627: INFO: Pod "pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7" satisfied condition "Succeeded or Failed"
Jun 23 10:11:29.681: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 10:11:29.837: INFO: Waiting for pod pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7 to disappear
Jun 23 10:11:29.874: INFO: Pod pod-secrets-a8505b64-6e49-4952-85a4-e6bdd21399e7 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.762 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":281,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 75 lines ...
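The [sig-apps] Deployment output that follows dumps every pod the deployment owns while it is being rolled to a second ReplicaSet: pods from ReplicaSet 55df494869 (image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2) are reported available, while pods from ReplicaSet 57ccb67bb8 stay Pending with ErrImagePull because webserver:404 is not a pullable image. A minimal client-go sketch of reading the rollout counters such a test polls; the function name is illustrative:

package deploymentexample

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printRolloutState reports how many replicas of a Deployment are updated to
// the current pod template, ready and available.
func printRolloutState(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
	d, err := cs.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	desired := int32(0)
	if d.Spec.Replicas != nil {
		desired = *d.Spec.Replicas
	}
	fmt.Printf("%s: desired=%d updated=%d ready=%d available=%d unavailable=%d\n",
		d.Name, desired, d.Status.UpdatedReplicas, d.Status.ReadyReplicas,
		d.Status.AvailableReplicas, d.Status.UnavailableReplicas)
	return nil
}

Run against namespace deployment-5011 and deployment webserver-deployment from the dump below, the available count would be expected to stay below desired for as long as the unpullable ReplicaSet's pods cannot start.
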
&Pod{ObjectMeta:{webserver-deployment-55df494869-xhwx8 webserver-deployment-55df494869- deployment-5011 1a2685a7-98d4-4085-b632-82b427217762 10387 0 2022-06-23 10:11:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 d339e554-16be-4b2c-b88a-c526cee38f9f 0xc0037033e0 0xc0037033e1}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d339e554-16be-4b2c-b88a-c526cee38f9f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 10:11:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7zfvq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zfvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup
:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-kn3q,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.5,PodIP:100.96.4.194,StartTime:2022-06-23 10:11:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-23 10:11:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://688bf77516469f5317b8e25498105b07a68b068c161045a6dd30ae7d6178edc4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 23 10:11:30.689: INFO: Pod "webserver-deployment-55df494869-zj6rh" is available: &Pod{ObjectMeta:{webserver-deployment-55df494869-zj6rh webserver-deployment-55df494869- deployment-5011 b3299edf-b3f5-43a0-a3b5-d37fe9fb4ef0 10416 0 2022-06-23 10:11:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 d339e554-16be-4b2c-b88a-c526cee38f9f 0xc0037035b0 0xc0037035b1}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d339e554-16be-4b2c-b88a-c526cee38f9f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 10:11:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.227\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-msrl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-msrl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-kn3q,HostNe
twork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.5,PodIP:100.96.4.227,StartTime:2022-06-23 10:11:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-23 10:11:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://cceb1eda2447ec1f6f9e1c7f1c039d9aba6bc67bebb98cb310c96050b5ed44fb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 23 10:11:30.689: INFO: Pod "webserver-deployment-57ccb67bb8-2w59j" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-2w59j webserver-deployment-57ccb67bb8- deployment-5011 7603aadc-e2c7-4236-abc8-7b420812161f 10767 0 2022-06-23 10:11:28 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 39f41ef8-2e38-45a3-8501-2bdbd5c209a9 0xc003703780 0xc003703781}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39f41ef8-2e38-45a3-8501-2bdbd5c209a9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rsbph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rsbph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-kn3q,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operat
or:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 23 10:11:30.689: INFO: Pod "webserver-deployment-57ccb67bb8-67z6t" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-67z6t webserver-deployment-57ccb67bb8- deployment-5011 c81a13c2-f7f8-4346-840c-3094a53cb253 10809 0 2022-06-23 10:11:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 39f41ef8-2e38-45a3-8501-2bdbd5c209a9 0xc0037038e0 0xc0037038e1}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39f41ef8-2e38-45a3-8501-2bdbd5c209a9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 10:11:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.2\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zgvv9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zgvv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-kn3q,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.5,PodIP:100.96.4.2,StartTime:2022-06-23 10:11:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 23 10:11:30.689: INFO: Pod "webserver-deployment-57ccb67bb8-b22t8" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-b22t8 webserver-deployment-57ccb67bb8- deployment-5011 00b2f0fa-9cc8-4b5b-b82f-aa5481d08836 10636 0 2022-06-23 10:11:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 39f41ef8-2e38-45a3-8501-2bdbd5c209a9 0xc003703ae0 0xc003703ae1}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39f41ef8-2e38-45a3-8501-2bdbd5c209a9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sk47g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sk47g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-j6c5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastPr
obeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 23 10:11:30.689: INFO: Pod "webserver-deployment-57ccb67bb8-dnsvm" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-dnsvm webserver-deployment-57ccb67bb8- deployment-5011 7deb8c39-d130-498d-99ca-3bf184457708 10756 0 2022-06-23 10:11:28 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 39f41ef8-2e38-45a3-8501-2bdbd5c209a9 0xc003703c40 0xc003703c41}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39f41ef8-2e38-45a3-8501-2bdbd5c209a9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vghmv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vghmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-djk0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 23 10:11:30.690: INFO: Pod "webserver-deployment-57ccb67bb8-dpg2j" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-dpg2j webserver-deployment-57ccb67bb8- deployment-5011 2693d356-012b-4eda-ac96-39d9ea8a3bc9 10888 0 2022-06-23 10:11:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 39f41ef8-2e38-45a3-8501-2bdbd5c209a9 0xc003703da0 0xc003703da1}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39f41ef8-2e38-45a3-8501-2bdbd5c209a9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 10:11:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.2.248\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-75rct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75rct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-j6c5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.16.2,PodIP:100.96.2.248,StartTime:2022-06-23 10:11:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.2.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 23 10:11:30.690: INFO: Pod "webserver-deployment-57ccb67bb8-jkssp" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-jkssp webserver-deployment-57ccb67bb8- deployment-5011 1bf66764-749e-47f8-aa28-749df6329d82 10775 0 2022-06-23 10:11:28 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 39f41ef8-2e38-45a3-8501-2bdbd5c209a9 0xc003703fa0 0xc003703fa1}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39f41ef8-2e38-45a3-8501-2bdbd5c209a9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zfhsh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zfhsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-x977,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastPr
obeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 23 10:11:30.690: INFO: Pod "webserver-deployment-57ccb67bb8-n6g7x" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-n6g7x webserver-deployment-57ccb67bb8- deployment-5011 1e5aa910-0c1e-4012-a174-a0b9f7ae772e 10794 0 2022-06-23 10:11:28 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 39f41ef8-2e38-45a3-8501-2bdbd5c209a9 0xc003584100 0xc003584101}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39f41ef8-2e38-45a3-8501-2bdbd5c209a9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xcj56,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcj56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-djk0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 23 10:11:30.690: INFO: Pod "webserver-deployment-57ccb67bb8-nqqv4" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-nqqv4 webserver-deployment-57ccb67bb8- deployment-5011 1c9211aa-26f9-48f3-9fb7-eb59366de2a8 10786 0 2022-06-23 10:11:28 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 39f41ef8-2e38-45a3-8501-2bdbd5c209a9 0xc003584260 0xc003584261}] [] [{kube-controller-manager Update v1 2022-06-23 10:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39f41ef8-2e38-45a3-8501-2bdbd5c209a9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9ktdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9ktdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:nodes-us-west3-a-j6c5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastPr
obeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 10:11:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} ... skipping 16 lines ... [32m• [SLOW TEST:15.156 seconds][0m [sig-apps] Deployment [90mtest/e2e/apps/framework.go:23[0m deployment should support proportional scaling [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":12,"skipped":332,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:25.177: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename var-expansion [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test substitution in container's command Jun 23 10:11:25.400: INFO: Waiting up to 5m0s for pod "var-expansion-0b7df560-803e-47e5-bd58-cc52d1feae8a" in namespace "var-expansion-3528" to be "Succeeded or Failed" Jun 23 10:11:25.425: INFO: Pod "var-expansion-0b7df560-803e-47e5-bd58-cc52d1feae8a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.347362ms Jun 23 10:11:27.451: INFO: Pod "var-expansion-0b7df560-803e-47e5-bd58-cc52d1feae8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050224057s Jun 23 10:11:29.485: INFO: Pod "var-expansion-0b7df560-803e-47e5-bd58-cc52d1feae8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084614054s Jun 23 10:11:31.509: INFO: Pod "var-expansion-0b7df560-803e-47e5-bd58-cc52d1feae8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108904941s [1mSTEP[0m: Saw pod success Jun 23 10:11:31.510: INFO: Pod "var-expansion-0b7df560-803e-47e5-bd58-cc52d1feae8a" satisfied condition "Succeeded or Failed" Jun 23 10:11:31.534: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod var-expansion-0b7df560-803e-47e5-bd58-cc52d1feae8a container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:31.595: INFO: Waiting for pod var-expansion-0b7df560-803e-47e5-bd58-cc52d1feae8a to disappear Jun 23 10:11:31.619: INFO: Pod var-expansion-0b7df560-803e-47e5-bd58-cc52d1feae8a no longer exists [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
[32m• [SLOW TEST:6.493 seconds][0m [sig-node] Variable Expansion [90mtest/e2e/common/node/framework.go:23[0m should allow substituting values in a container's command [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":192,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":66,"failed":0} [BeforeEach] [sig-node] Secrets test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:23.928: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename secrets [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: creating secret secrets-8752/secret-test-c5635fe2-0ec3-4c6e-a62a-9dcbefe92082 [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:11:24.170: INFO: Waiting up to 5m0s for pod "pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43" in namespace "secrets-8752" to be "Succeeded or Failed" Jun 23 10:11:24.235: INFO: Pod "pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43": Phase="Pending", Reason="", readiness=false. Elapsed: 64.898445ms Jun 23 10:11:26.272: INFO: Pod "pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101252835s Jun 23 10:11:28.296: INFO: Pod "pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125888454s Jun 23 10:11:30.327: INFO: Pod "pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157042889s Jun 23 10:11:32.352: INFO: Pod "pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.181277507s [1mSTEP[0m: Saw pod success Jun 23 10:11:32.352: INFO: Pod "pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43" satisfied condition "Succeeded or Failed" Jun 23 10:11:32.374: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43 container env-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:32.436: INFO: Waiting for pod pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43 to disappear Jun 23 10:11:32.459: INFO: Pod pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43 no longer exists [AfterEach] [sig-node] Secrets test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:8.583 seconds][0m [sig-node] Secrets [90mtest/e2e/common/node/framework.go:23[0m should be consumable via the environment [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":66,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Subpath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 5 lines ... 
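(Aside, not part of the captured log.) The conformance cases above repeatedly log "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" and then poll the pod phase every couple of seconds. A minimal client-go sketch of that polling pattern is shown below; it assumes the kubeconfig path printed earlier in the log and reuses the secrets-8752 pod name from the entries above purely as an example.

```go
// Illustrative sketch only: poll a pod until it reaches a terminal phase,
// mirroring the "Succeeded or Failed" waits in the e2e log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodCompletion returns nil once the pod is Succeeded, or an error if it
// Failed or the timeout expired.
func waitForPodCompletion(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return nil // the "Succeeded or Failed" condition is satisfied
		case corev1.PodFailed:
			return fmt.Errorf("pod %s/%s failed", ns, name)
		}
		time.Sleep(2 * time.Second) // roughly the ~2s intervals visible in the log
	}
	return fmt.Errorf("timed out waiting for pod %s/%s", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // kubeconfig path as printed in the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and pod name are examples copied from the log entries above.
	err = waitForPodCompletion(context.Background(), cs, "secrets-8752",
		"pod-configmaps-f86a363b-ba8a-4f16-b91a-c31919a23c43", 5*time.Minute)
	fmt.Println("wait result:", err)
}
```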
test/e2e/storage/subpath.go:40 [1mSTEP[0m: Setting up data [It] should support subpaths with downward pod [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating pod pod-subpath-test-downwardapi-qqvs [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 23 10:11:06.354: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qqvs" in namespace "subpath-4482" to be "Succeeded or Failed" Jun 23 10:11:06.385: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Pending", Reason="", readiness=false. Elapsed: 30.786559ms Jun 23 10:11:08.435: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080276738s Jun 23 10:11:10.462: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107592837s Jun 23 10:11:12.532: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Running", Reason="", readiness=true. Elapsed: 6.177664604s Jun 23 10:11:14.558: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Running", Reason="", readiness=true. Elapsed: 8.203242351s Jun 23 10:11:16.585: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Running", Reason="", readiness=true. Elapsed: 10.230764859s ... skipping 3 lines ... Jun 23 10:11:24.798: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Running", Reason="", readiness=true. Elapsed: 18.443778279s Jun 23 10:11:26.845: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Running", Reason="", readiness=true. Elapsed: 20.490089596s Jun 23 10:11:28.876: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Running", Reason="", readiness=true. Elapsed: 22.521989521s Jun 23 10:11:30.902: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Running", Reason="", readiness=true. Elapsed: 24.547047927s Jun 23 10:11:32.926: INFO: Pod "pod-subpath-test-downwardapi-qqvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.571870682s [1mSTEP[0m: Saw pod success Jun 23 10:11:32.926: INFO: Pod "pod-subpath-test-downwardapi-qqvs" satisfied condition "Succeeded or Failed" Jun 23 10:11:32.951: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-subpath-test-downwardapi-qqvs container test-container-subpath-downwardapi-qqvs: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:33.022: INFO: Waiting for pod pod-subpath-test-downwardapi-qqvs to disappear Jun 23 10:11:33.046: INFO: Pod pod-subpath-test-downwardapi-qqvs no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-downwardapi-qqvs Jun 23 10:11:33.046: INFO: Deleting pod "pod-subpath-test-downwardapi-qqvs" in namespace "subpath-4482" ... skipping 8 lines ... 
[90mtest/e2e/storage/utils/framework.go:23[0m Atomic writer volumes [90mtest/e2e/storage/subpath.go:36[0m should support subpaths with downward pod [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","total":-1,"completed":10,"skipped":82,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":16,"skipped":320,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:09.200: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename crd-publish-openapi [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 14 lines ... [32m• [SLOW TEST:25.269 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m removes definition from spec when one version gets changed to not be served [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":17,"skipped":320,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 34 lines ... [90mtest/e2e/apimachinery/framework.go:23[0m listing mutating webhooks should work [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":8,"skipped":104,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 34 lines ... 
[32m• [SLOW TEST:31.398 seconds][0m [sig-network] Services [90mtest/e2e/network/common/framework.go:23[0m should be able to change the type from ClusterIP to ExternalName [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":9,"skipped":196,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Deployment test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 40 lines ... [32m• [SLOW TEST:18.671 seconds][0m [sig-apps] Deployment [90mtest/e2e/apps/framework.go:23[0m RecreateDeployment should delete old pods and create new ones [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":220,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] CSIStorageCapacity test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:39.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "csistoragecapacity-7184" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]","total":-1,"completed":7,"skipped":222,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/sysctl.go:37 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] ... skipping 5 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/sysctl.go:67 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod with the kernel.shm_rmid_forced sysctl [1mSTEP[0m: Watching for error events or started pod [1mSTEP[0m: Waiting for pod completion [1mSTEP[0m: Checking that the pod succeeded [1mSTEP[0m: Getting logs from the pod [1mSTEP[0m: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:12.574 seconds][0m [sig-node] Sysctls [LinuxOnly] [NodeConformance] [90mtest/e2e/common/node/framework.go:23[0m should support sysctls [MinimumKubeletVersion:1.21] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":8,"skipped":298,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Secrets test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... 
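(Aside, not part of the captured log.) The "[sig-node] Sysctls" case above creates a pod that sets the kernel.shm_rmid_forced sysctl and then checks it from inside the container. A hedged sketch of such a pod spec follows; the object name, image, and command are placeholders, not the test's actual source.

```go
// Illustrative sketch only: a pod that sets a safe sysctl via its pod-level
// security context, the mechanism exercised by the Sysctls conformance case above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func sysctlPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{
					Name:  "kernel.shm_rmid_forced", // safe sysctl, as in the log above
					Value: "1",
				}},
			},
			Containers: []corev1.Container{{
				Name:    "check",
				Image:   "busybox", // placeholder image
				Command: []string{"/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
		},
	}
}

func main() {
	fmt.Println(sysctlPod().Name) // creation via client-go omitted for brevity
}
```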
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating secret with name secret-test-eafbc53a-eadb-4c2d-b56e-a87804396f5f [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:11:31.955: INFO: Waiting up to 5m0s for pod "pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e" in namespace "secrets-1180" to be "Succeeded or Failed" Jun 23 10:11:31.986: INFO: Pod "pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.434926ms Jun 23 10:11:34.011: INFO: Pod "pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055870539s Jun 23 10:11:36.040: INFO: Pod "pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084986147s Jun 23 10:11:38.066: INFO: Pod "pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110823185s Jun 23 10:11:40.107: INFO: Pod "pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151650305s [1mSTEP[0m: Saw pod success Jun 23 10:11:40.107: INFO: Pod "pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e" satisfied condition "Succeeded or Failed" Jun 23 10:11:40.135: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:40.202: INFO: Waiting for pod pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e to disappear Jun 23 10:11:40.226: INFO: Pod pod-secrets-3d5f830e-86fc-4664-93e4-b9ffc2ccc91e no longer exists [AfterEach] [sig-storage] Secrets test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
[32m• [SLOW TEST:8.573 seconds][0m [sig-storage] Secrets [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":198,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 20 lines ... [32m• [SLOW TEST:16.492 seconds][0m [sig-api-machinery] ResourceQuota [90mtest/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a secret. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":4,"skipped":96,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Job test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 17 lines ... 
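(Aside, not part of the captured log.) The ResourceQuota case above creates a quota and then observes how a Secret's lifecycle shows up in it. A sketch of a quota object that caps the number of Secrets in a namespace is below; the namespace, name, and limit are assumed example values.

```go
// Illustrative sketch only: a ResourceQuota limiting Secrets, of the kind the
// "capture the life of a secret" conformance case above creates and then watches.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-for-secrets", Namespace: "example"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceSecrets: resource.MustParse("10"),
			},
		},
	}
}

func main() {
	q := secretQuota()
	hard := q.Spec.Hard[corev1.ResourceSecrets]
	fmt.Printf("quota %q limits secrets to %s\n", q.Name, hard.String())
	// Once created via client-go, the quota controller populates q.Status.Used,
	// which is what a test can poll as secrets are created and deleted.
}
```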
[32m• [SLOW TEST:12.442 seconds][0m [sig-apps] Job [90mtest/e2e/apps/framework.go:23[0m should apply changes to a job status [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should apply changes to a job status [Conformance]","total":-1,"completed":12,"skipped":307,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:26.351: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0666 on tmpfs Jun 23 10:11:26.599: INFO: Waiting up to 5m0s for pod "pod-3388d752-acdc-42e1-97f7-c5065d064459" in namespace "emptydir-49" to be "Succeeded or Failed" Jun 23 10:11:26.658: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459": Phase="Pending", Reason="", readiness=false. Elapsed: 58.446967ms Jun 23 10:11:28.686: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087176119s Jun 23 10:11:30.723: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124201347s Jun 23 10:11:32.751: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152066707s Jun 23 10:11:34.776: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17672887s Jun 23 10:11:36.802: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459": Phase="Pending", Reason="", readiness=false. Elapsed: 10.203068678s Jun 23 10:11:38.828: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459": Phase="Pending", Reason="", readiness=false. Elapsed: 12.228683788s Jun 23 10:11:40.854: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459": Phase="Pending", Reason="", readiness=false. Elapsed: 14.254428043s Jun 23 10:11:42.879: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.280103922s [1mSTEP[0m: Saw pod success Jun 23 10:11:42.879: INFO: Pod "pod-3388d752-acdc-42e1-97f7-c5065d064459" satisfied condition "Succeeded or Failed" Jun 23 10:11:42.904: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-3388d752-acdc-42e1-97f7-c5065d064459 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:42.960: INFO: Waiting for pod pod-3388d752-acdc-42e1-97f7-c5065d064459 to disappear Jun 23 10:11:42.984: INFO: Pod pod-3388d752-acdc-42e1-97f7-c5065d064459 no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
[32m• [SLOW TEST:16.688 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":170,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:30.793: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0777 on tmpfs Jun 23 10:11:31.003: INFO: Waiting up to 5m0s for pod "pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89" in namespace "emptydir-4039" to be "Succeeded or Failed" Jun 23 10:11:31.030: INFO: Pod "pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89": Phase="Pending", Reason="", readiness=false. Elapsed: 26.599288ms Jun 23 10:11:33.056: INFO: Pod "pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052248445s Jun 23 10:11:35.083: INFO: Pod "pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0795457s Jun 23 10:11:37.110: INFO: Pod "pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106677026s Jun 23 10:11:39.135: INFO: Pod "pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131603718s Jun 23 10:11:41.161: INFO: Pod "pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157795012s Jun 23 10:11:43.191: INFO: Pod "pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.187241725s [1mSTEP[0m: Saw pod success Jun 23 10:11:43.191: INFO: Pod "pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89" satisfied condition "Succeeded or Failed" Jun 23 10:11:43.216: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:43.271: INFO: Waiting for pod pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89 to disappear Jun 23 10:11:43.299: INFO: Pod pod-5a1f9ee7-5679-4c92-8f66-3ecdf612ee89 no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
[32m• [SLOW TEST:12.560 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":335,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 30 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:43.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "svcaccounts-2252" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":13,"skipped":309,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 33 lines ... [32m• [SLOW TEST:30.833 seconds][0m [sig-network] Services [90mtest/e2e/network/common/framework.go:23[0m should be able to change the type from NodePort to ExternalName [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":11,"skipped":228,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:39.629: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0644 on tmpfs Jun 23 10:11:39.873: INFO: Waiting up to 5m0s for pod "pod-2ba434da-022a-44bf-bbe8-f2cdb2cd630e" in namespace "emptydir-7757" to be "Succeeded or Failed" Jun 23 10:11:39.899: INFO: Pod "pod-2ba434da-022a-44bf-bbe8-f2cdb2cd630e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.316103ms Jun 23 10:11:41.924: INFO: Pod "pod-2ba434da-022a-44bf-bbe8-f2cdb2cd630e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05092011s Jun 23 10:11:43.968: INFO: Pod "pod-2ba434da-022a-44bf-bbe8-f2cdb2cd630e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094711996s Jun 23 10:11:45.997: INFO: Pod "pod-2ba434da-022a-44bf-bbe8-f2cdb2cd630e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.123737031s [1mSTEP[0m: Saw pod success Jun 23 10:11:45.997: INFO: Pod "pod-2ba434da-022a-44bf-bbe8-f2cdb2cd630e" satisfied condition "Succeeded or Failed" Jun 23 10:11:46.024: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-2ba434da-022a-44bf-bbe8-f2cdb2cd630e container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:46.096: INFO: Waiting for pod pod-2ba434da-022a-44bf-bbe8-f2cdb2cd630e to disappear Jun 23 10:11:46.122: INFO: Pod pod-2ba434da-022a-44bf-bbe8-f2cdb2cd630e no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.555 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":308,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Discovery test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 90 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:46.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "discovery-8491" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":12,"skipped":250,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 25 lines ... 
Jun 23 10:11:25.896: INFO: Unable to read jessie_udp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:25.922: INFO: Unable to read jessie_tcp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:25.968: INFO: Unable to read jessie_udp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:25.996: INFO: Unable to read jessie_tcp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:26.035: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:26.068: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:26.242: INFO: Lookups using dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-167 wheezy_tcp@dns-test-service.dns-167 wheezy_udp@dns-test-service.dns-167.svc wheezy_tcp@dns-test-service.dns-167.svc wheezy_udp@_http._tcp.dns-test-service.dns-167.svc wheezy_tcp@_http._tcp.dns-test-service.dns-167.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-167 jessie_tcp@dns-test-service.dns-167 jessie_udp@dns-test-service.dns-167.svc jessie_tcp@dns-test-service.dns-167.svc jessie_udp@_http._tcp.dns-test-service.dns-167.svc jessie_tcp@_http._tcp.dns-test-service.dns-167.svc] Jun 23 10:11:31.315: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:31.360: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:31.411: INFO: Unable to read wheezy_udp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:31.449: INFO: Unable to read wheezy_tcp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:31.474: INFO: Unable to read wheezy_udp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) ... skipping 5 lines ... 
Jun 23 10:11:31.815: INFO: Unable to read jessie_udp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:31.850: INFO: Unable to read jessie_tcp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:31.897: INFO: Unable to read jessie_udp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:31.925: INFO: Unable to read jessie_tcp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:31.953: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:31.988: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:32.108: INFO: Lookups using dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-167 wheezy_tcp@dns-test-service.dns-167 wheezy_udp@dns-test-service.dns-167.svc wheezy_tcp@dns-test-service.dns-167.svc wheezy_udp@_http._tcp.dns-test-service.dns-167.svc wheezy_tcp@_http._tcp.dns-test-service.dns-167.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-167 jessie_tcp@dns-test-service.dns-167 jessie_udp@dns-test-service.dns-167.svc jessie_tcp@dns-test-service.dns-167.svc jessie_udp@_http._tcp.dns-test-service.dns-167.svc jessie_tcp@_http._tcp.dns-test-service.dns-167.svc] Jun 23 10:11:36.293: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:36.357: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:36.394: INFO: Unable to read wheezy_udp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:36.431: INFO: Unable to read wheezy_tcp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:36.538: INFO: Unable to read wheezy_udp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) ... skipping 5 lines ... 
Jun 23 10:11:36.893: INFO: Unable to read jessie_udp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:36.955: INFO: Unable to read jessie_tcp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:36.984: INFO: Unable to read jessie_udp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:37.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:37.064: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:37.090: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:37.303: INFO: Lookups using dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-167 wheezy_tcp@dns-test-service.dns-167 wheezy_udp@dns-test-service.dns-167.svc wheezy_tcp@dns-test-service.dns-167.svc wheezy_udp@_http._tcp.dns-test-service.dns-167.svc wheezy_tcp@_http._tcp.dns-test-service.dns-167.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-167 jessie_tcp@dns-test-service.dns-167 jessie_udp@dns-test-service.dns-167.svc jessie_tcp@dns-test-service.dns-167.svc jessie_udp@_http._tcp.dns-test-service.dns-167.svc jessie_tcp@_http._tcp.dns-test-service.dns-167.svc] Jun 23 10:11:41.269: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.299: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.325: INFO: Unable to read wheezy_udp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.351: INFO: Unable to read wheezy_tcp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) ... skipping 5 lines ... 
Jun 23 10:11:41.651: INFO: Unable to read jessie_udp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.677: INFO: Unable to read jessie_tcp@dns-test-service.dns-167 from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.710: INFO: Unable to read jessie_udp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.739: INFO: Unable to read jessie_tcp@dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.769: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.805: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-167.svc from pod dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2: the server could not find the requested resource (get pods dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2) Jun 23 10:11:41.921: INFO: Lookups using dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-167 wheezy_tcp@dns-test-service.dns-167 wheezy_udp@dns-test-service.dns-167.svc wheezy_tcp@dns-test-service.dns-167.svc wheezy_udp@_http._tcp.dns-test-service.dns-167.svc wheezy_tcp@_http._tcp.dns-test-service.dns-167.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-167 jessie_tcp@dns-test-service.dns-167 jessie_udp@dns-test-service.dns-167.svc jessie_tcp@dns-test-service.dns-167.svc jessie_udp@_http._tcp.dns-test-service.dns-167.svc jessie_tcp@_http._tcp.dns-test-service.dns-167.svc] Jun 23 10:11:46.873: INFO: DNS probes using dns-167/dns-test-80b24f7a-9248-4c23-b238-2d747536a9d2 succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: deleting the test service [1mSTEP[0m: deleting the test headless service ... skipping 6 lines ... [32m• [SLOW TEST:34.007 seconds][0m [sig-network] DNS [90mtest/e2e/network/common/framework.go:23[0m should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":322,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... 
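(Aside, not part of the captured log.) The DNS case above keeps retrying a fixed set of lookups roughly every five seconds until none of them fail ("Lookups ... failed for: [...]", then "DNS probes ... succeeded"). The real test probes from inside a test pod (the wheezy/jessie results above); the standalone sketch below only illustrates the retry-until-all-resolve loop, using placeholder names and the local resolver.

```go
// Illustrative sketch only: retry a set of DNS lookups until all succeed or a
// deadline passes, echoing the retry pattern in the e2e DNS output above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// failedLookups returns the names that did not resolve within ctx.
func failedLookups(ctx context.Context, names []string) []string {
	var failed []string
	r := &net.Resolver{}
	for _, n := range names {
		if _, err := r.LookupHost(ctx, n); err != nil {
			failed = append(failed, n)
		}
	}
	return failed
}

func main() {
	// Placeholder names modeled on the service names in the log above.
	names := []string{
		"dns-test-service.dns-167.svc.cluster.local",
		"dns-test-service.dns-167",
	}
	deadline := time.Now().Add(2 * time.Minute)
	for {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		failed := failedLookups(ctx, names)
		cancel()
		if len(failed) == 0 {
			fmt.Println("DNS probes succeeded")
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("giving up; lookups still failing for: %v\n", failed)
			return
		}
		fmt.Printf("lookups failed for: %v; retrying\n", failed)
		time.Sleep(5 * time.Second) // the e2e output above retries on a ~5s cadence
	}
}
```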
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-73e5ca1d-26b1-476c-ba7b-c8aa0bfeda07 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:11:42.648: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de4fcb12-2935-433d-acfe-90c93db75e9b" in namespace "projected-9026" to be "Succeeded or Failed" Jun 23 10:11:42.671: INFO: Pod "pod-projected-configmaps-de4fcb12-2935-433d-acfe-90c93db75e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.468167ms Jun 23 10:11:44.699: INFO: Pod "pod-projected-configmaps-de4fcb12-2935-433d-acfe-90c93db75e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051291463s Jun 23 10:11:46.724: INFO: Pod "pod-projected-configmaps-de4fcb12-2935-433d-acfe-90c93db75e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07635791s Jun 23 10:11:48.749: INFO: Pod "pod-projected-configmaps-de4fcb12-2935-433d-acfe-90c93db75e9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100760527s [1mSTEP[0m: Saw pod success Jun 23 10:11:48.749: INFO: Pod "pod-projected-configmaps-de4fcb12-2935-433d-acfe-90c93db75e9b" satisfied condition "Succeeded or Failed" Jun 23 10:11:48.773: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-projected-configmaps-de4fcb12-2935-433d-acfe-90c93db75e9b container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:48.834: INFO: Waiting for pod pod-projected-configmaps-de4fcb12-2935-433d-acfe-90c93db75e9b to disappear Jun 23 10:11:48.859: INFO: Pod pod-projected-configmaps-de4fcb12-2935-433d-acfe-90c93db75e9b no longer exists [AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.507 seconds][0m [sig-storage] Projected configMap [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":100,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Deployment test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 74 lines ... [32m• [SLOW TEST:12.639 seconds][0m [sig-apps] Deployment [90mtest/e2e/apps/framework.go:23[0m should validate Deployment Status endpoints [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":10,"skipped":221,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Subpath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 5 lines ... 
test/e2e/storage/subpath.go:40 [1mSTEP[0m: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating pod pod-subpath-test-configmap-nt7h [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 23 10:11:21.861: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nt7h" in namespace "subpath-817" to be "Succeeded or Failed" Jun 23 10:11:21.885: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Pending", Reason="", readiness=false. Elapsed: 23.697742ms Jun 23 10:11:23.910: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04819977s Jun 23 10:11:25.933: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072090297s Jun 23 10:11:27.956: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094924458s Jun 23 10:11:30.023: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Running", Reason="", readiness=true. Elapsed: 8.16165485s Jun 23 10:11:32.047: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Running", Reason="", readiness=true. Elapsed: 10.185859989s ... skipping 4 lines ... Jun 23 10:11:42.188: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Running", Reason="", readiness=true. Elapsed: 20.326764844s Jun 23 10:11:44.218: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Running", Reason="", readiness=true. Elapsed: 22.356401446s Jun 23 10:11:46.241: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Running", Reason="", readiness=true. Elapsed: 24.379134501s Jun 23 10:11:48.266: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Running", Reason="", readiness=true. Elapsed: 26.404262521s Jun 23 10:11:50.290: INFO: Pod "pod-subpath-test-configmap-nt7h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.428715392s [1mSTEP[0m: Saw pod success Jun 23 10:11:50.290: INFO: Pod "pod-subpath-test-configmap-nt7h" satisfied condition "Succeeded or Failed" Jun 23 10:11:50.313: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-subpath-test-configmap-nt7h container test-container-subpath-configmap-nt7h: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:50.392: INFO: Waiting for pod pod-subpath-test-configmap-nt7h to disappear Jun 23 10:11:50.414: INFO: Pod pod-subpath-test-configmap-nt7h no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-configmap-nt7h Jun 23 10:11:50.414: INFO: Deleting pod "pod-subpath-test-configmap-nt7h" in namespace "subpath-817" ... skipping 8 lines ... [90mtest/e2e/storage/utils/framework.go:23[0m Atomic writer volumes [90mtest/e2e/storage/subpath.go:36[0m should support subpaths with configmap pod with mountPath of existing file [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]","total":-1,"completed":8,"skipped":108,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 8 lines ... 
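(Aside, not part of the captured log.) The Subpath case above mounts a single ConfigMap key over an existing file inside the container via a subPath volume mount. A sketch of such a pod spec follows; all names, the image, and the chosen file path are placeholders rather than the test's actual fixtures.

```go
// Illustrative sketch only: project one ConfigMap key over an existing file
// using VolumeMount.SubPath, as in the Subpath conformance case above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func subPathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "example-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder image
				Command: []string{"/bin/sh", "-c", "cat /etc/resolv.conf"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/resolv.conf", // an existing file inside the container
					SubPath:   "resolv.conf",      // single ConfigMap key mounted over it
				}},
			}},
		},
	}
}

func main() {
	fmt.Println(subPathPod().Name) // creation via client-go omitted for brevity
}
```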
[1mSTEP[0m: Create set of pods Jun 23 10:11:20.234: INFO: created test-pod-1 Jun 23 10:11:20.269: INFO: created test-pod-2 Jun 23 10:11:20.301: INFO: created test-pod-3 [1mSTEP[0m: waiting for all 3 pods to be running Jun 23 10:11:20.301: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-6858' to be running and ready Jun 23 10:11:20.381: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:20.381: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:20.381: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:20.381: INFO: 0 / 3 pods in namespace 'pods-6858' are running and ready (0 seconds elapsed) Jun 23 10:11:20.381: INFO: expected 0 pod replicas in namespace 'pods-6858', 0 are Running and Ready. Jun 23 10:11:20.381: INFO: POD NODE PHASE GRACE CONDITIONS Jun 23 10:11:20.381: INFO: test-pod-1 nodes-us-west3-a-j6c5 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:20.381: INFO: test-pod-2 nodes-us-west3-a-x977 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:20.381: INFO: test-pod-3 nodes-us-west3-a-j6c5 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:20.381: INFO: Jun 23 10:11:22.461: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:22.461: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:22.461: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:22.461: INFO: 0 / 3 pods in namespace 'pods-6858' are running and ready (2 seconds elapsed) Jun 23 10:11:22.461: INFO: expected 0 pod replicas in namespace 'pods-6858', 0 are Running and Ready. Jun 23 10:11:22.461: INFO: POD NODE PHASE GRACE CONDITIONS Jun 23 10:11:22.461: INFO: test-pod-1 nodes-us-west3-a-j6c5 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:22.461: INFO: test-pod-2 nodes-us-west3-a-x977 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:22.461: INFO: test-pod-3 nodes-us-west3-a-j6c5 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:22.461: INFO: Jun 23 10:11:24.518: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:24.519: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:24.519: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:24.519: INFO: 0 / 3 pods in namespace 'pods-6858' are running and ready (4 seconds elapsed) Jun 23 10:11:24.519: INFO: expected 0 pod replicas in namespace 'pods-6858', 0 are Running and Ready. 
Jun 23 10:11:24.519: INFO: POD NODE PHASE GRACE CONDITIONS Jun 23 10:11:24.519: INFO: test-pod-1 nodes-us-west3-a-j6c5 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:24.519: INFO: test-pod-2 nodes-us-west3-a-x977 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:24.519: INFO: test-pod-3 nodes-us-west3-a-j6c5 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:24.519: INFO: Jun 23 10:11:26.469: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:26.469: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:26.469: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:26.469: INFO: 0 / 3 pods in namespace 'pods-6858' are running and ready (6 seconds elapsed) Jun 23 10:11:26.469: INFO: expected 0 pod replicas in namespace 'pods-6858', 0 are Running and Ready. Jun 23 10:11:26.469: INFO: POD NODE PHASE GRACE CONDITIONS Jun 23 10:11:26.469: INFO: test-pod-1 nodes-us-west3-a-j6c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:26.469: INFO: test-pod-2 nodes-us-west3-a-x977 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:26.469: INFO: test-pod-3 nodes-us-west3-a-j6c5 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:26.469: INFO: Jun 23 10:11:28.497: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:28.497: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:28.497: INFO: 1 / 3 pods in namespace 'pods-6858' are running and ready (8 seconds elapsed) Jun 23 10:11:28.498: INFO: expected 0 pod replicas in namespace 'pods-6858', 0 are Running and Ready. 
Jun 23 10:11:28.498: INFO: POD NODE PHASE GRACE CONDITIONS Jun 23 10:11:28.498: INFO: test-pod-1 nodes-us-west3-a-j6c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:28.498: INFO: test-pod-3 nodes-us-west3-a-j6c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:28.498: INFO: Jun 23 10:11:30.496: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:30.496: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:30.496: INFO: 1 / 3 pods in namespace 'pods-6858' are running and ready (10 seconds elapsed) Jun 23 10:11:30.496: INFO: expected 0 pod replicas in namespace 'pods-6858', 0 are Running and Ready. Jun 23 10:11:30.496: INFO: POD NODE PHASE GRACE CONDITIONS Jun 23 10:11:30.496: INFO: test-pod-1 nodes-us-west3-a-j6c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:30.496: INFO: test-pod-3 nodes-us-west3-a-j6c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:30.496: INFO: Jun 23 10:11:32.457: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:32.457: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:32.457: INFO: 1 / 3 pods in namespace 'pods-6858' are running and ready (12 seconds elapsed) Jun 23 10:11:32.457: INFO: expected 0 pod replicas in namespace 'pods-6858', 0 are Running and Ready. 
Jun 23 10:11:32.457: INFO: POD NODE PHASE GRACE CONDITIONS Jun 23 10:11:32.457: INFO: test-pod-1 nodes-us-west3-a-j6c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:32.457: INFO: test-pod-3 nodes-us-west3-a-j6c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:32.457: INFO: Jun 23 10:11:34.460: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:34.460: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jun 23 10:11:34.460: INFO: 1 / 3 pods in namespace 'pods-6858' are running and ready (14 seconds elapsed) Jun 23 10:11:34.460: INFO: expected 0 pod replicas in namespace 'pods-6858', 0 are Running and Ready. Jun 23 10:11:34.460: INFO: POD NODE PHASE GRACE CONDITIONS Jun 23 10:11:34.460: INFO: test-pod-1 nodes-us-west3-a-j6c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:34.460: INFO: test-pod-3 nodes-us-west3-a-j6c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 10:11:20 +0000 UTC }] Jun 23 10:11:34.460: INFO: ... skipping 23 lines ... [32m• [SLOW TEST:30.705 seconds][0m [sig-node] Pods [90mtest/e2e/common/node/framework.go:23[0m should delete a collection of pods [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":15,"skipped":279,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... 
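The "waiting for all 3 pods to be running ... X / 3 pods in namespace 'pods-6858' are running and ready" loop above repeatedly lists the namespace and counts pods whose Ready condition is True. A rough client-go equivalent of that check follows; the helper names and poll interval are assumptions for illustration.

package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether a pod is Running with Ready=True, the condition
// the log keeps waiting on while pods are still Pending.
func isPodReady(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// waitForReadyPods polls until at least "want" pods in the namespace are
// running and ready, mirroring the counting loop in the log.
func waitForReadyPods(ctx context.Context, cs kubernetes.Interface, ns string, want int, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		ready := 0
		for i := range pods.Items {
			if isPodReady(&pods.Items[i]) {
				ready++
			}
		}
		return ready >= want, nil
	})
}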
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:11:47.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07ae1147-929d-4508-a26d-1247646645ff" in namespace "downward-api-7762" to be "Succeeded or Failed" Jun 23 10:11:47.390: INFO: Pod "downwardapi-volume-07ae1147-929d-4508-a26d-1247646645ff": Phase="Pending", Reason="", readiness=false. Elapsed: 27.478409ms Jun 23 10:11:49.415: INFO: Pod "downwardapi-volume-07ae1147-929d-4508-a26d-1247646645ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052640616s Jun 23 10:11:51.492: INFO: Pod "downwardapi-volume-07ae1147-929d-4508-a26d-1247646645ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12967852s [1mSTEP[0m: Saw pod success Jun 23 10:11:51.492: INFO: Pod "downwardapi-volume-07ae1147-929d-4508-a26d-1247646645ff" satisfied condition "Succeeded or Failed" Jun 23 10:11:51.561: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod downwardapi-volume-07ae1147-929d-4508-a26d-1247646645ff container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:51.756: INFO: Waiting for pod downwardapi-volume-07ae1147-929d-4508-a26d-1247646645ff to disappear Jun 23 10:11:51.797: INFO: Pod downwardapi-volume-07ae1147-929d-4508-a26d-1247646645ff no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188 Jun 23 10:11:51.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "downward-api-7762" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":340,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 40 lines ... [32m• [SLOW TEST:11.016 seconds][0m [sig-api-machinery] Garbage collector [90mtest/e2e/apimachinery/framework.go:23[0m should delete pods created by rc when not orphaning [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":11,"skipped":338,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Subpath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 5 lines ... test/e2e/storage/subpath.go:40 [1mSTEP[0m: Setting up data [It] should support subpaths with configmap pod [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating pod pod-subpath-test-configmap-l2vn [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 23 10:11:25.237: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l2vn" in namespace "subpath-4949" to be "Succeeded or Failed" Jun 23 10:11:25.262: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.36775ms Jun 23 10:11:27.284: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046581362s Jun 23 10:11:29.358: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120970705s Jun 23 10:11:31.383: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145856466s Jun 23 10:11:33.406: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168367854s Jun 23 10:11:35.430: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Running", Reason="", readiness=true. Elapsed: 10.192255519s ... skipping 3 lines ... Jun 23 10:11:43.537: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Running", Reason="", readiness=true. Elapsed: 18.299172941s Jun 23 10:11:45.561: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Running", Reason="", readiness=true. Elapsed: 20.32375502s Jun 23 10:11:47.585: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Running", Reason="", readiness=true. Elapsed: 22.347266356s Jun 23 10:11:49.610: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Running", Reason="", readiness=false. Elapsed: 24.372439802s Jun 23 10:11:51.652: INFO: Pod "pod-subpath-test-configmap-l2vn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.414430183s [1mSTEP[0m: Saw pod success Jun 23 10:11:51.652: INFO: Pod "pod-subpath-test-configmap-l2vn" satisfied condition "Succeeded or Failed" Jun 23 10:11:51.702: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod pod-subpath-test-configmap-l2vn container test-container-subpath-configmap-l2vn: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:51.803: INFO: Waiting for pod pod-subpath-test-configmap-l2vn to disappear Jun 23 10:11:51.827: INFO: Pod pod-subpath-test-configmap-l2vn no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-configmap-l2vn Jun 23 10:11:51.827: INFO: Deleting pod "pod-subpath-test-configmap-l2vn" in namespace "subpath-4949" ... skipping 8 lines ... [90mtest/e2e/storage/utils/framework.go:23[0m Atomic writer volumes [90mtest/e2e/storage/subpath.go:36[0m should support subpaths with configmap pod [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","total":-1,"completed":13,"skipped":243,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:39.248: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename var-expansion [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test substitution in volume subpath Jun 23 10:11:39.446: INFO: Waiting up to 5m0s for pod "var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca" in namespace "var-expansion-5977" to be "Succeeded or Failed" Jun 23 10:11:39.470: INFO: Pod "var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca": Phase="Pending", Reason="", readiness=false. Elapsed: 23.882178ms Jun 23 10:11:41.495: INFO: Pod "var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048895465s Jun 23 10:11:43.521: INFO: Pod "var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074436967s Jun 23 10:11:45.548: INFO: Pod "var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102073153s Jun 23 10:11:47.574: INFO: Pod "var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127491952s Jun 23 10:11:49.599: INFO: Pod "var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.152613155s Jun 23 10:11:51.652: INFO: Pod "var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.205901424s [1mSTEP[0m: Saw pod success Jun 23 10:11:51.652: INFO: Pod "var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca" satisfied condition "Succeeded or Failed" Jun 23 10:11:51.702: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca container dapi-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:51.816: INFO: Waiting for pod var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca to disappear Jun 23 10:11:51.850: INFO: Pod var-expansion-d7d6ede9-eaa6-4443-8457-202474de30ca no longer exists [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 ... skipping 6 lines ... [90mtest/e2e/common/node/framework.go:23[0m should allow substituting values in a volume subpath [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":8,"skipped":230,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 41 lines ... [32m• [SLOW TEST:24.507 seconds][0m [sig-network] Services [90mtest/e2e/network/common/framework.go:23[0m should be able to change the type from ExternalName to ClusterIP [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":12,"skipped":243,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... 
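The Services case above ("should be able to change the type from ExternalName to ClusterIP") exercises an update of spec.type on an existing Service. A minimal sketch of that mutation with client-go follows; conflict retries are omitted and the port value is an assumption, so this is not the conformance test's own code.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// convertToClusterIP switches an ExternalName service to ClusterIP and gives
// it a port, roughly what the logged conformance case verifies.
func convertToClusterIP(ctx context.Context, cs kubernetes.Interface, ns, name string) (*corev1.Service, error) {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = "" // no longer a CNAME-style service
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}
	return cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
}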
[32m• [SLOW TEST:7.086 seconds][0m [sig-storage] Downward API volume [90mtest/e2e/common/storage/framework.go:23[0m should update labels on modification [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":252,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 17 lines ... Jun 23 10:11:28.326: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:28.350: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:28.594: INFO: Unable to read jessie_udp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:28.627: INFO: Unable to read jessie_tcp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:28.656: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:28.687: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:28.818: INFO: Lookups using dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5 failed for: [wheezy_udp@dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@dns-test-service.dns-6746.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_udp@dns-test-service.dns-6746.svc.cluster.local jessie_tcp@dns-test-service.dns-6746.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local] Jun 23 10:11:33.848: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:33.874: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:33.903: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:33.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:34.118: INFO: Unable to read jessie_udp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:34.169: INFO: Unable to read jessie_tcp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:34.196: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:34.326: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:34.495: INFO: Lookups using dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5 failed for: [wheezy_udp@dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@dns-test-service.dns-6746.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_udp@dns-test-service.dns-6746.svc.cluster.local jessie_tcp@dns-test-service.dns-6746.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local] Jun 23 10:11:38.854: INFO: Unable to read wheezy_udp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:38.892: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:38.920: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods 
dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:38.968: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:39.098: INFO: Unable to read jessie_udp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:39.122: INFO: Unable to read jessie_tcp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:39.150: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:39.175: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:39.270: INFO: Lookups using dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5 failed for: [wheezy_udp@dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@dns-test-service.dns-6746.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_udp@dns-test-service.dns-6746.svc.cluster.local jessie_tcp@dns-test-service.dns-6746.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local] Jun 23 10:11:43.847: INFO: Unable to read wheezy_udp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:43.874: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:43.901: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:43.929: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:44.080: INFO: Unable to read jessie_udp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:44.115: INFO: Unable to read jessie_tcp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could 
not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:44.143: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:44.170: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:44.314: INFO: Lookups using dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5 failed for: [wheezy_udp@dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@dns-test-service.dns-6746.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_udp@dns-test-service.dns-6746.svc.cluster.local jessie_tcp@dns-test-service.dns-6746.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local] Jun 23 10:11:48.843: INFO: Unable to read wheezy_udp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:48.867: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:48.895: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:48.931: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:49.077: INFO: Unable to read jessie_udp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:49.102: INFO: Unable to read jessie_tcp@dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:49.131: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:49.177: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local from pod dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5: the server could not find the requested resource (get pods dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5) Jun 23 10:11:49.283: INFO: Lookups using dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5 failed for: 
[wheezy_udp@dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@dns-test-service.dns-6746.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_udp@dns-test-service.dns-6746.svc.cluster.local jessie_tcp@dns-test-service.dns-6746.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6746.svc.cluster.local] Jun 23 10:11:54.250: INFO: DNS probes using dns-6746/dns-test-c99565a4-4e67-40b3-ba55-1159f65694d5 succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: deleting the test service [1mSTEP[0m: deleting the test headless service ... skipping 6 lines ... [32m• [SLOW TEST:36.542 seconds][0m [sig-network] DNS [90mtest/e2e/network/common/framework.go:23[0m should provide DNS for services [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":11,"skipped":203,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Lease test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 7 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:54.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "lease-test-7343" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":14,"skipped":336,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Job test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... [32m• [SLOW TEST:20.299 seconds][0m [sig-apps] Job [90mtest/e2e/apps/framework.go:23[0m should create pods for an Indexed job with completion indexes and specified hostname [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]","total":-1,"completed":18,"skipped":400,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 49 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:55.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-9879" for this suite. 
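The DNS block above probes names such as dns-test-service.dns-6746.svc.cluster.local and the matching _http._tcp SRV records from "wheezy" and "jessie" utility pods, retrying until every lookup resolves. The sketch below shows the equivalent record checks with Go's resolver; the service name is taken from the log, but the program is illustrative and would only resolve when run inside the cluster.

package main

import (
	"fmt"
	"net"
)

// Illustrative in-cluster DNS probe: resolves the service's A/AAAA records and
// its _http._tcp SRV records, the record types the e2e DNS test keeps checking.
func main() {
	const svc = "dns-test-service.dns-6746.svc.cluster.local"

	if addrs, err := net.LookupHost(svc); err != nil {
		fmt.Println("A/AAAA lookup failed:", err)
	} else {
		fmt.Println("A/AAAA records:", addrs)
	}

	if _, srvs, err := net.LookupSRV("http", "tcp", svc); err != nil {
		fmt.Println("SRV lookup failed:", err)
	} else {
		for _, s := range srvs {
			fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
		}
	}
}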
[32m•[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":9,"skipped":120,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicationController test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:55.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replication-controller-8897" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":12,"skipped":217,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":196,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:43.444: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:11:43.667: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6" in namespace "projected-648" to be "Succeeded or Failed" Jun 23 10:11:43.705: INFO: Pod "downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.015178ms Jun 23 10:11:45.730: INFO: Pod "downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063056171s Jun 23 10:11:47.755: INFO: Pod "downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088266631s Jun 23 10:11:49.781: INFO: Pod "downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114398816s Jun 23 10:11:51.823: INFO: Pod "downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156222341s Jun 23 10:11:53.852: INFO: Pod "downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.184742139s Jun 23 10:11:55.878: INFO: Pod "downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.210927653s [1mSTEP[0m: Saw pod success Jun 23 10:11:55.878: INFO: Pod "downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6" satisfied condition "Succeeded or Failed" Jun 23 10:11:55.903: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:55.960: INFO: Waiting for pod downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6 to disappear Jun 23 10:11:55.984: INFO: Pod downwardapi-volume-a7b6d8f9-6712-454e-a9cd-a3fb1c31faa6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:12.595 seconds][0m [sig-storage] Projected downwardAPI [90mtest/e2e/common/storage/framework.go:23[0m should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":196,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Container Runtime test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 25 lines ... [90mtest/e2e/common/node/runtime.go:136[0m should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":289,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:44.067: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0777 on tmpfs Jun 23 10:11:44.283: INFO: Waiting up to 5m0s for pod "pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3" in namespace "emptydir-2465" to be "Succeeded or Failed" Jun 23 10:11:44.316: INFO: Pod "pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3": 
Phase="Pending", Reason="", readiness=false. Elapsed: 33.128284ms Jun 23 10:11:46.340: INFO: Pod "pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056697982s Jun 23 10:11:48.370: INFO: Pod "pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086765011s Jun 23 10:11:50.398: INFO: Pod "pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114433717s Jun 23 10:11:52.430: INFO: Pod "pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146915199s Jun 23 10:11:54.455: INFO: Pod "pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.17190923s Jun 23 10:11:56.482: INFO: Pod "pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.198864625s [1mSTEP[0m: Saw pod success Jun 23 10:11:56.482: INFO: Pod "pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3" satisfied condition "Succeeded or Failed" Jun 23 10:11:56.511: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:56.608: INFO: Waiting for pod pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3 to disappear Jun 23 10:11:56.648: INFO: Pod pod-4d79d0a5-a97d-4a9a-b11c-707e448293a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:12.665 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":329,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Secrets test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating secret with name secret-test-map-701651ee-aa9b-44f8-b535-fd9e182a36b0 [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:11:46.438: INFO: Waiting up to 5m0s for pod "pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d" in namespace "secrets-9153" to be "Succeeded or Failed" Jun 23 10:11:46.462: INFO: Pod "pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.717102ms Jun 23 10:11:48.488: INFO: Pod "pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049687646s Jun 23 10:11:50.515: INFO: Pod "pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076662704s Jun 23 10:11:52.541: INFO: Pod "pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102412665s Jun 23 10:11:54.569: INFO: Pod "pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.130899419s Jun 23 10:11:56.608: INFO: Pod "pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169865108s [1mSTEP[0m: Saw pod success Jun 23 10:11:56.608: INFO: Pod "pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d" satisfied condition "Succeeded or Failed" Jun 23 10:11:56.648: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:56.759: INFO: Waiting for pod pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d to disappear Jun 23 10:11:56.793: INFO: Pod pod-secrets-0ae75767-2280-4098-a7f5-7189273e987d no longer exists [AfterEach] [sig-storage] Secrets test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:10.754 seconds][0m [sig-storage] Secrets [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":312,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:57.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "certificates-4232" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":13,"skipped":220,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 32 lines ... 
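The Secrets case above consumes a secret "with mappings", i.e. a secret volume whose Items remap a key to a chosen file path and mode. A hedged Go sketch of that volume source follows; the key, path, and mode values are placeholders, not the test's actual data.

package e2esketch

import corev1 "k8s.io/api/core/v1"

// mappedSecretVolume projects a single secret key to a chosen path with an
// explicit file mode, the shape of volume the logged "consumable from pods in
// volume with mappings" case creates. Key, path, and mode are illustrative.
func mappedSecretVolume(secretName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
					Mode: &mode,
				}},
			},
		},
	}
}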
[90mtest/e2e/common/node/lifecycle_hook.go:46[0m should execute poststart http hook properly [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":145,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:52.006: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir volume type on tmpfs Jun 23 10:11:52.222: INFO: Waiting up to 5m0s for pod "pod-d7b29e38-aa7f-4a78-a092-4ed76c531d0b" in namespace "emptydir-123" to be "Succeeded or Failed" Jun 23 10:11:52.248: INFO: Pod "pod-d7b29e38-aa7f-4a78-a092-4ed76c531d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.943815ms Jun 23 10:11:54.271: INFO: Pod "pod-d7b29e38-aa7f-4a78-a092-4ed76c531d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048667639s Jun 23 10:11:56.298: INFO: Pod "pod-d7b29e38-aa7f-4a78-a092-4ed76c531d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075708234s Jun 23 10:11:58.320: INFO: Pod "pod-d7b29e38-aa7f-4a78-a092-4ed76c531d0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097789054s [1mSTEP[0m: Saw pod success Jun 23 10:11:58.320: INFO: Pod "pod-d7b29e38-aa7f-4a78-a092-4ed76c531d0b" satisfied condition "Succeeded or Failed" Jun 23 10:11:58.349: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-d7b29e38-aa7f-4a78-a092-4ed76c531d0b container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:11:58.422: INFO: Waiting for pod pod-d7b29e38-aa7f-4a78-a092-4ed76c531d0b to disappear Jun 23 10:11:58.450: INFO: Pod pod-d7b29e38-aa7f-4a78-a092-4ed76c531d0b no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.504 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":262,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... 
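The EmptyDir cases above ("volume on tmpfs should have the correct mode", "(non-root,0777,tmpfs)") rely on setting the emptyDir medium to Memory so the kubelet backs the volume with tmpfs. A minimal sketch of that volume source, with an assumed helper name:

package e2esketch

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDir returns an emptyDir volume backed by memory (tmpfs), which is
// what the logged EmptyDir conformance cases mount before checking file modes.
func tmpfsEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
}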
[32m• [SLOW TEST:6.966 seconds][0m [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should be able to convert from CR v1 to CR v2 [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":17,"skipped":342,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] PodTemplates test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 11 lines ... test/e2e/framework/framework.go:188 Jun 23 10:11:58.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "podtemplate-98" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PodTemplates should replace a pod template [Conformance]","total":-1,"completed":15,"skipped":268,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 69 lines ... [90mtest/e2e/kubectl/framework.go:23[0m Update Demo [90mtest/e2e/kubectl/kubectl.go:295[0m should create and stop a replication controller [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":12,"skipped":339,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 8 lines ... test/e2e/framework/framework.go:188 Jun 23 10:12:01.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "custom-resource-definition-7814" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":7,"skipped":146,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Secrets test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... 
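The CustomResourceDefinition case above simply lists CRD objects through the apiextensions API group. With the apiextensions clientset that looks roughly like the sketch below; clientset construction from a kubeconfig is omitted, and the helper name is an assumption, not the conformance test's code.

package e2esketch

import (
	"context"
	"fmt"

	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// listCRDs prints the names of all CustomResourceDefinitions, the operation
// the "listing custom resource definition objects works" case exercises.
func listCRDs(ctx context.Context, cs apiextensionsclientset.Interface) error {
	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
	return nil
}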
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating secret with name secret-test-35b358a1-b8f5-4801-9d84-c4eb352ee1ed [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:11:53.915: INFO: Waiting up to 5m0s for pod "pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f" in namespace "secrets-969" to be "Succeeded or Failed" Jun 23 10:11:53.942: INFO: Pod "pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.317891ms Jun 23 10:11:55.968: INFO: Pod "pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053578655s Jun 23 10:11:57.995: INFO: Pod "pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080560185s Jun 23 10:12:00.044: INFO: Pod "pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129154381s Jun 23 10:12:02.109: INFO: Pod "pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.193962277s [1mSTEP[0m: Saw pod success Jun 23 10:12:02.109: INFO: Pod "pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f" satisfied condition "Succeeded or Failed" Jun 23 10:12:02.168: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:02.252: INFO: Waiting for pod pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f to disappear Jun 23 10:12:02.280: INFO: Pod pod-secrets-bc60c7f0-1348-4e94-82b4-b3d7f664db8f no longer exists [AfterEach] [sig-storage] Secrets test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:8.645 seconds][0m [sig-storage] Secrets [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":264,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 18 lines ... 
[32m• [SLOW TEST:7.452 seconds][0m [sig-apps] ReplicaSet [90mtest/e2e/apps/framework.go:23[0m should serve a basic image on each replica with a public image [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":10,"skipped":124,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 38 lines ... [32m• [SLOW TEST:98.034 seconds][0m [sig-storage] ConfigMap [90mtest/e2e/common/storage/framework.go:23[0m optional updates should be reflected in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":82,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 36 lines ... test/e2e/framework/framework.go:188 Jun 23 10:12:04.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "gc-690" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":11,"skipped":157,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Networking test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 69 lines ... [90mtest/e2e/common/network/networking.go:32[0m should function for intra-pod communication: udp [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":68,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... 
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43 [It] should provide podname only [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:11:59.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f30f0781-0dbc-4dd7-9f0a-5741456dc9b7" in namespace "downward-api-5417" to be "Succeeded or Failed" Jun 23 10:11:59.309: INFO: Pod "downwardapi-volume-f30f0781-0dbc-4dd7-9f0a-5741456dc9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.32147ms Jun 23 10:12:01.336: INFO: Pod "downwardapi-volume-f30f0781-0dbc-4dd7-9f0a-5741456dc9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054364214s Jun 23 10:12:03.360: INFO: Pod "downwardapi-volume-f30f0781-0dbc-4dd7-9f0a-5741456dc9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078554665s Jun 23 10:12:05.408: INFO: Pod "downwardapi-volume-f30f0781-0dbc-4dd7-9f0a-5741456dc9b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125766767s [1mSTEP[0m: Saw pod success Jun 23 10:12:05.408: INFO: Pod "downwardapi-volume-f30f0781-0dbc-4dd7-9f0a-5741456dc9b7" satisfied condition "Succeeded or Failed" Jun 23 10:12:05.430: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod downwardapi-volume-f30f0781-0dbc-4dd7-9f0a-5741456dc9b7 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:05.562: INFO: Waiting for pod downwardapi-volume-f30f0781-0dbc-4dd7-9f0a-5741456dc9b7 to disappear Jun 23 10:12:05.598: INFO: Pod downwardapi-volume-f30f0781-0dbc-4dd7-9f0a-5741456dc9b7 no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.573 seconds][0m [sig-storage] Downward API volume [90mtest/e2e/common/storage/framework.go:23[0m should provide podname only [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":294,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Container Runtime test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 23 lines ... 
[90mtest/e2e/common/node/runtime.go:43[0m on terminated container [90mtest/e2e/common/node/runtime.go:136[0m should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":330,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Container Runtime test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 23 lines ... [90mtest/e2e/common/node/runtime.go:43[0m on terminated container [90mtest/e2e/common/node/runtime.go:136[0m should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":185,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context test/e2e/common/node/security_context.go:48 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 Jun 23 10:11:55.366: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9b83460f-8e55-49ff-afec-3e488303bd17" in namespace "security-context-test-7440" to be "Succeeded or Failed" Jun 23 10:11:55.404: INFO: Pod "alpine-nnp-false-9b83460f-8e55-49ff-afec-3e488303bd17": Phase="Pending", Reason="", readiness=false. Elapsed: 38.000186ms Jun 23 10:11:57.428: INFO: Pod "alpine-nnp-false-9b83460f-8e55-49ff-afec-3e488303bd17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062535566s Jun 23 10:11:59.454: INFO: Pod "alpine-nnp-false-9b83460f-8e55-49ff-afec-3e488303bd17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087953041s Jun 23 10:12:01.480: INFO: Pod "alpine-nnp-false-9b83460f-8e55-49ff-afec-3e488303bd17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113816774s Jun 23 10:12:03.502: INFO: Pod "alpine-nnp-false-9b83460f-8e55-49ff-afec-3e488303bd17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136541316s Jun 23 10:12:05.543: INFO: Pod "alpine-nnp-false-9b83460f-8e55-49ff-afec-3e488303bd17": Phase="Pending", Reason="", readiness=false. Elapsed: 10.176804494s Jun 23 10:12:07.566: INFO: Pod "alpine-nnp-false-9b83460f-8e55-49ff-afec-3e488303bd17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.200586854s Jun 23 10:12:07.566: INFO: Pod "alpine-nnp-false-9b83460f-8e55-49ff-afec-3e488303bd17" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context test/e2e/framework/framework.go:188 Jun 23 10:12:07.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-7440" for this suite. ... skipping 13 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating secret with name projected-secret-test-bdd6dcbe-a304-4f4d-92e0-e11d6dceba91 [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:11:57.516: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e" in namespace "projected-6989" to be "Succeeded or Failed" Jun 23 10:11:57.555: INFO: Pod "pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.013926ms Jun 23 10:11:59.586: INFO: Pod "pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06985517s Jun 23 10:12:01.613: INFO: Pod "pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097122011s Jun 23 10:12:03.640: INFO: Pod "pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123790777s Jun 23 10:12:05.668: INFO: Pod "pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e": Phase="Running", Reason="", readiness=true. Elapsed: 8.151950376s Jun 23 10:12:07.701: INFO: Pod "pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.184791213s [1mSTEP[0m: Saw pod success Jun 23 10:12:07.701: INFO: Pod "pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e" satisfied condition "Succeeded or Failed" Jun 23 10:12:07.725: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:07.788: INFO: Waiting for pod pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e to disappear Jun 23 10:12:07.818: INFO: Pod pod-projected-secrets-b64b75c6-9eb5-48f1-a47c-38210cb7f94e no longer exists [AfterEach] [sig-storage] Projected secret test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:10.882 seconds][0m [sig-storage] Projected secret [90mtest/e2e/common/storage/framework.go:23[0m should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":316,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 26 lines ... 
[32m• [SLOW TEST:16.788 seconds][0m [sig-api-machinery] ResourceQuota [90mtest/e2e/apimachinery/framework.go:23[0m should verify ResourceQuota with best effort scope. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":9,"skipped":243,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Watchers test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 29 lines ... [32m• [SLOW TEST:10.642 seconds][0m [sig-api-machinery] Watchers [90mtest/e2e/apimachinery/framework.go:23[0m should observe an object deletion if it stops meeting the requirements of the selector [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":13,"skipped":373,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 20 lines ... [32m• [SLOW TEST:9.067 seconds][0m [sig-storage] Downward API volume [90mtest/e2e/common/storage/framework.go:23[0m should update annotations on modification [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":303,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43 [It] should provide container's cpu request [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:12:07.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a65b3e49-0f3f-4eac-a325-796d092e229a" in namespace "projected-3130" to be "Succeeded or Failed" Jun 23 10:12:07.041: INFO: Pod "downwardapi-volume-a65b3e49-0f3f-4eac-a325-796d092e229a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.580145ms Jun 23 10:12:09.082: INFO: Pod "downwardapi-volume-a65b3e49-0f3f-4eac-a325-796d092e229a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.064544125s Jun 23 10:12:11.107: INFO: Pod "downwardapi-volume-a65b3e49-0f3f-4eac-a325-796d092e229a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089998106s Jun 23 10:12:13.159: INFO: Pod "downwardapi-volume-a65b3e49-0f3f-4eac-a325-796d092e229a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14231631s [1mSTEP[0m: Saw pod success Jun 23 10:12:13.160: INFO: Pod "downwardapi-volume-a65b3e49-0f3f-4eac-a325-796d092e229a" satisfied condition "Succeeded or Failed" Jun 23 10:12:13.213: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod downwardapi-volume-a65b3e49-0f3f-4eac-a325-796d092e229a container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:13.303: INFO: Waiting for pod downwardapi-volume-a65b3e49-0f3f-4eac-a325-796d092e229a to disappear Jun 23 10:12:13.329: INFO: Pod downwardapi-volume-a65b3e49-0f3f-4eac-a325-796d092e229a no longer exists [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.617 seconds][0m [sig-storage] Projected downwardAPI [90mtest/e2e/common/storage/framework.go:23[0m should provide container's cpu request [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":339,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 11 lines ... [1mSTEP[0m: Looking for a node to schedule stateful set and pod [1mSTEP[0m: Creating pod with conflicting port in namespace statefulset-9805 [1mSTEP[0m: Waiting until pod test-pod will start running in namespace statefulset-9805 [1mSTEP[0m: Creating statefulset with conflicting port in namespace statefulset-9805 [1mSTEP[0m: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9805 Jun 23 10:11:58.944: INFO: Observed stateful pod in namespace: statefulset-9805, name: ss-0, uid: eb9347df-ee3a-4d8f-85c4-91507ce3cc2c, status phase: Pending. Waiting for statefulset controller to delete. Jun 23 10:11:58.970: INFO: Observed stateful pod in namespace: statefulset-9805, name: ss-0, uid: eb9347df-ee3a-4d8f-85c4-91507ce3cc2c, status phase: Failed. Waiting for statefulset controller to delete. Jun 23 10:11:58.980: INFO: Observed stateful pod in namespace: statefulset-9805, name: ss-0, uid: eb9347df-ee3a-4d8f-85c4-91507ce3cc2c, status phase: Failed. Waiting for statefulset controller to delete. Jun 23 10:11:58.990: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9805 [1mSTEP[0m: Removing pod with conflicting port in namespace statefulset-9805 [1mSTEP[0m: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9805 and will be in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jun 23 10:12:03.165: INFO: Deleting all statefulset in ns statefulset-9805 ... skipping 11 lines ... 
[90mtest/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90mtest/e2e/apps/statefulset.go:101[0m Should recreate evicted statefulset [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":11,"skipped":284,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-scheduling] LimitRange test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 40 lines ... [90mtest/e2e/scheduling/framework.go:40[0m should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":17,"skipped":374,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 34 lines ... [90mtest/e2e/apimachinery/framework.go:23[0m should mutate custom resource with different stored version [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":7,"skipped":71,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 29 lines ... Jun 23 10:12:12.296: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 23 10:12:12.296: INFO: Running '/logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=kubectl-6115 describe pod agnhost-primary-858st' Jun 23 10:12:12.486: INFO: stderr: "" Jun 23 10:12:12.486: INFO: stdout: "Name: agnhost-primary-858st\nNamespace: kubectl-6115\nPriority: 0\nNode: nodes-us-west3-a-j6c5/10.0.16.2\nStart Time: Thu, 23 Jun 2022 10:12:05 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 100.96.2.27\nIPs:\n IP: 100.96.2.27\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://f32345c566573f4bc2754299723a16203ed48e396b5c893bb139f3a233f88e79\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 23 Jun 2022 10:12:07 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2287f (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-2287f:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned kubectl-6115/agnhost-primary-858st to nodes-us-west3-a-j6c5\n Normal Pulled 5s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 5s kubelet Created container agnhost-primary\n Normal Started 5s kubelet Started container agnhost-primary\n" Jun 23 10:12:12.486: INFO: Running '/logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=kubectl-6115 describe rc agnhost-primary' Jun 23 10:12:12.721: INFO: stderr: "" Jun 23 10:12:12.721: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6115\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-primary-858st\n" Jun 23 10:12:12.721: INFO: Running '/logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=kubectl-6115 describe service agnhost-primary' Jun 23 10:12:12.969: INFO: stderr: "" Jun 23 10:12:12.970: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6115\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.66.157.175\nIPs: 
100.66.157.175\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.2.27:6379\nSession Affinity: None\nEvents: <none>\n" Jun 23 10:12:13.000: INFO: Running '/logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=kubectl-6115 describe node master-us-west3-a-xwk0' Jun 23 10:12:13.462: INFO: stderr: "" Jun 23 10:12:13.463: INFO: stdout: "Name: master-us-west3-a-xwk0\nRoles: control-plane\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=e2-standard-2\n beta.kubernetes.io/os=linux\n cloud.google.com/metadata-proxy-ready=true\n failure-domain.beta.kubernetes.io/region=us-west3\n failure-domain.beta.kubernetes.io/zone=us-west3-a\n kops.k8s.io/instancegroup=master-us-west3-a\n kops.k8s.io/kops-controller-pki=\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master-us-west3-a-xwk0\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node.kubernetes.io/exclude-from-external-load-balancers=\n node.kubernetes.io/instance-type=e2-standard-2\n topology.gke.io/zone=us-west3-a\n topology.kubernetes.io/region=us-west3\n topology.kubernetes.io/zone=us-west3-a\nAnnotations: csi.volume.kubernetes.io/nodeid:\n {\"pd.csi.storage.gke.io\":\"projects/k8s-boskos-gce-project-15/zones/us-west3-a/instances/master-us-west3-a-xwk0\"}\n io.cilium.network.ipv4-cilium-host: 100.96.0.233\n io.cilium.network.ipv4-health-ip: 100.96.0.132\n io.cilium.network.ipv4-pod-cidr: 100.96.0.0/24\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Thu, 23 Jun 2022 10:03:46 +0000\nTaints: node-role.kubernetes.io/control-plane:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master-us-west3-a-xwk0\n AcquireTime: <unset>\n RenewTime: Thu, 23 Jun 2022 10:12:08 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Thu, 23 Jun 2022 10:06:01 +0000 Thu, 23 Jun 2022 10:06:01 +0000 CiliumIsUp Cilium is running on this node\n MemoryPressure False Thu, 23 Jun 2022 10:11:26 +0000 Thu, 23 Jun 2022 10:03:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 23 Jun 2022 10:11:26 +0000 Thu, 23 Jun 2022 10:03:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 23 Jun 2022 10:11:26 +0000 Thu, 23 Jun 2022 10:03:40 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 23 Jun 2022 10:11:26 +0000 Thu, 23 Jun 2022 10:05:39 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n InternalIP: 10.0.16.6\n ExternalIP: 34.106.71.117\n Hostname: master-us-west3-a-xwk0\nCapacity:\n cpu: 2\n ephemeral-storage: 48600704Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8145396Ki\n pods: 110\nAllocatable:\n cpu: 2\n ephemeral-storage: 44790408733\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8042996Ki\n pods: 110\nSystem Info:\n Machine ID: ba5d35e8674e13becba8b09933f70658\n System UUID: ba5d35e8-674e-13be-cba8-b09933f70658\n Boot ID: 53df6924-38bc-48ce-8c54-232ae32fe91d\n Kernel Version: 5.11.0-1028-gcp\n OS Image: Ubuntu 20.04.3 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.6\n Kubelet Version: v1.24.2\n Kube-Proxy Version: v1.24.2\nPodCIDR: 100.96.0.0/24\nPodCIDRs: 100.96.0.0/24\nProviderID: gce://k8s-boskos-gce-project-15/us-west3-a/master-us-west3-a-xwk0\nNon-terminated Pods: (13 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n gce-pd-csi-driver csi-gce-pd-controller-7c6b7c9655-djtwd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m53s\n gce-pd-csi-driver csi-gce-pd-node-rdqgd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m54s\n kube-system cilium-operator-56f498975b-bprrp 25m (1%) 0 (0%) 128Mi (1%) 0 (0%) 7m53s\n kube-system cilium-sq9br 100m (5%) 0 (0%) 128Mi (1%) 100Mi (1%) 7m54s\n kube-system cloud-controller-manager-xdsmj 200m (10%) 0 (0%) 0 (0%) 0 (0%) 7m54s\n kube-system dns-controller-6b785dc767-fp6sv 50m (2%) 0 (0%) 50Mi (0%) 0 (0%) 7m53s\n kube-system etcd-manager-events-master-us-west3-a-xwk0 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 7m34s\n kube-system etcd-manager-main-master-us-west3-a-xwk0 200m (10%) 0 (0%) 100Mi (1%) 0 (0%) 7m46s\n kube-system kops-controller-gxr6g 50m (2%) 0 (0%) 50Mi (0%) 0 (0%) 7m54s\n kube-system kube-apiserver-master-us-west3-a-xwk0 150m (7%) 0 (0%) 0 (0%) 0 (0%) 7m34s\n kube-system kube-controller-manager-master-us-west3-a-xwk0 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m9s\n kube-system kube-scheduler-master-us-west3-a-xwk0 100m (5%) 0 (0%) 0 (0%) 0 (0%) 7m36s\n kube-system metadata-proxy-v0.12-g7l4h 32m (1%) 32m (1%) 45Mi (0%) 45Mi (0%) 7m34s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1107m (55%) 32m (1%)\n memory 601Mi (7%) 145Mi (1%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal NodeAllocatableEnforced 9m30s kubelet Updated Node Allocatable limit across pods\n Normal NodeHasSufficientMemory 9m29s (x8 over 9m31s) kubelet Node master-us-west3-a-xwk0 status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 9m29s (x7 over 9m31s) kubelet Node master-us-west3-a-xwk0 status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 9m29s (x7 over 9m31s) kubelet Node master-us-west3-a-xwk0 status is now: NodeHasSufficientPID\n Normal RegisteredNode 7m54s node-controller Node master-us-west3-a-xwk0 event: Registered Node master-us-west3-a-xwk0 in Controller\n Normal Synced 7m36s cloud-node-controller Node synced successfully\n Normal CIDRNotAvailable 7m (x10 over 7m35s) cidrAllocator Node master-us-west3-a-xwk0 status is now: CIDRNotAvailable\n" ... skipping 13 lines ... 
[90mtest/e2e/kubectl/kubectl.go:1110[0m should check if kubectl describe prints relevant information for rc and pods [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":12,"skipped":182,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context test/e2e/common/node/security_context.go:48 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 Jun 23 10:12:10.257: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a1318572-12cf-479a-b0b5-3be3ef0b5670" in namespace "security-context-test-6696" to be "Succeeded or Failed" Jun 23 10:12:10.281: INFO: Pod "busybox-user-65534-a1318572-12cf-479a-b0b5-3be3ef0b5670": Phase="Pending", Reason="", readiness=false. Elapsed: 24.303728ms Jun 23 10:12:12.307: INFO: Pod "busybox-user-65534-a1318572-12cf-479a-b0b5-3be3ef0b5670": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049416167s Jun 23 10:12:14.367: INFO: Pod "busybox-user-65534-a1318572-12cf-479a-b0b5-3be3ef0b5670": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110264948s Jun 23 10:12:14.367: INFO: Pod "busybox-user-65534-a1318572-12cf-479a-b0b5-3be3ef0b5670" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context test/e2e/framework/framework.go:188 Jun 23 10:12:14.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-6696" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":397,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name configmap-test-volume-95fe4676-825b-4f81-82f0-ef0a4c91cda4 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:12:08.173: INFO: Waiting up to 5m0s for pod "pod-configmaps-9fe460d0-724d-42dd-ba6e-5561b4f2ee13" in namespace "configmap-1436" to be "Succeeded or Failed" Jun 23 10:12:08.197: INFO: Pod "pod-configmaps-9fe460d0-724d-42dd-ba6e-5561b4f2ee13": Phase="Pending", Reason="", readiness=false. Elapsed: 23.340878ms Jun 23 10:12:10.223: INFO: Pod "pod-configmaps-9fe460d0-724d-42dd-ba6e-5561b4f2ee13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049938374s Jun 23 10:12:12.249: INFO: Pod "pod-configmaps-9fe460d0-724d-42dd-ba6e-5561b4f2ee13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075346269s Jun 23 10:12:14.306: INFO: Pod "pod-configmaps-9fe460d0-724d-42dd-ba6e-5561b4f2ee13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133249161s [1mSTEP[0m: Saw pod success Jun 23 10:12:14.307: INFO: Pod "pod-configmaps-9fe460d0-724d-42dd-ba6e-5561b4f2ee13" satisfied condition "Succeeded or Failed" Jun 23 10:12:14.367: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-configmaps-9fe460d0-724d-42dd-ba6e-5561b4f2ee13 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:14.649: INFO: Waiting for pod pod-configmaps-9fe460d0-724d-42dd-ba6e-5561b4f2ee13 to disappear Jun 23 10:12:14.688: INFO: Pod pod-configmaps-9fe460d0-724d-42dd-ba6e-5561b4f2ee13 no longer exists [AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188 ... skipping 6 lines ... [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume as non-root [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":327,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 21 lines ... [1mSTEP[0m: Destroying namespace "services-4651" for this suite. [AfterEach] [sig-network] Services test/e2e/network/service.go:762 [32m•[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":13,"skipped":220,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 35 lines ... 
[32m• [SLOW TEST:17.302 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should be able to deny pod and configmap creation [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":14,"skipped":234,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 30 lines ... [32m• [SLOW TEST:8.387 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should mutate configmap [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":9,"skipped":202,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 12 lines ... test/e2e/framework/framework.go:188 Jun 23 10:12:15.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-6776" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":10,"skipped":205,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... test/e2e/framework/framework.go:188 Jun 23 10:12:16.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-496" for this suite. [32m•[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":8,"skipped":85,"failed":0} [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:12:16.140: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename kubectl [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 9 lines ... test/e2e/framework/framework.go:188 Jun 23 10:12:16.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-991" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":9,"skipped":85,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... [32m• [SLOW TEST:22.326 seconds][0m [sig-node] Probing container [90mtest/e2e/common/node/framework.go:23[0m with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":434,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 67 lines ... [90mtest/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90mtest/e2e/apps/statefulset.go:101[0m should perform canary updates and phased rolling updates of template modifications [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":4,"skipped":72,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":409,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:12:07.661: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename resourcequota [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 20 lines ... [32m• [SLOW TEST:13.747 seconds][0m [sig-api-machinery] ResourceQuota [90mtest/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a pod. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":20,"skipped":409,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:11:34.661: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename svcaccounts [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] test/e2e/framework/framework.go:652 Jun 23 10:11:34.890: INFO: created pod Jun 23 10:11:34.890: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6289" to be "Succeeded or Failed" Jun 23 10:11:34.918: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 27.531675ms Jun 23 10:11:36.948: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05749065s Jun 23 10:11:38.973: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082933331s Jun 23 10:11:40.999: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108618013s Jun 23 10:11:43.023: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133333467s Jun 23 10:11:45.050: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.159722084s Jun 23 10:11:47.077: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.187216957s Jun 23 10:11:49.104: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 14.214266516s Jun 23 10:11:51.150: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.260126882s [1mSTEP[0m: Saw pod success Jun 23 10:11:51.150: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Jun 23 10:12:21.153: INFO: polling logs Jun 23 10:12:21.279: INFO: Pod logs: I0623 10:11:37.382280 1 log.go:195] OK: Got token I0623 10:11:37.382334 1 log.go:195] validating with in-cluster discovery I0623 10:11:37.382764 1 log.go:195] OK: got issuer https://api.internal.e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local I0623 10:11:37.382799 1 log.go:195] Full, not-validated claims: ... skipping 14 lines ... 
[32m• [SLOW TEST:46.893 seconds][0m [sig-auth] ServiceAccounts [90mtest/e2e/auth/framework.go:23[0m ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":9,"skipped":106,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:12:14.852: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0777 on node default medium Jun 23 10:12:15.130: INFO: Waiting up to 5m0s for pod "pod-a27dc4f5-e900-4d7e-bb0b-54673ac4cdd6" in namespace "emptydir-7296" to be "Succeeded or Failed" Jun 23 10:12:15.167: INFO: Pod "pod-a27dc4f5-e900-4d7e-bb0b-54673ac4cdd6": Phase="Pending", Reason="", readiness=false. Elapsed: 37.039425ms Jun 23 10:12:17.191: INFO: Pod "pod-a27dc4f5-e900-4d7e-bb0b-54673ac4cdd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061308161s Jun 23 10:12:19.224: INFO: Pod "pod-a27dc4f5-e900-4d7e-bb0b-54673ac4cdd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093829629s Jun 23 10:12:21.304: INFO: Pod "pod-a27dc4f5-e900-4d7e-bb0b-54673ac4cdd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.173891167s [1mSTEP[0m: Saw pod success Jun 23 10:12:21.304: INFO: Pod "pod-a27dc4f5-e900-4d7e-bb0b-54673ac4cdd6" satisfied condition "Succeeded or Failed" Jun 23 10:12:21.421: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod pod-a27dc4f5-e900-4d7e-bb0b-54673ac4cdd6 container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:21.573: INFO: Waiting for pod pod-a27dc4f5-e900-4d7e-bb0b-54673ac4cdd6 to disappear Jun 23 10:12:21.639: INFO: Pod pod-a27dc4f5-e900-4d7e-bb0b-54673ac4cdd6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.886 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":344,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 30 lines ... 
[32m• [SLOW TEST:10.291 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should mutate custom resource with pruning [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":15,"skipped":361,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 13 lines ... test/e2e/framework/framework.go:188 Jun 23 10:12:22.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-3120" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":10,"skipped":113,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43 [It] should provide container's memory limit [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:12:16.793: INFO: Waiting up to 5m0s for pod "downwardapi-volume-672bcf31-5dce-4b26-9c44-08a6d5afe525" in namespace "projected-3618" to be "Succeeded or Failed" Jun 23 10:12:16.815: INFO: Pod "downwardapi-volume-672bcf31-5dce-4b26-9c44-08a6d5afe525": Phase="Pending", Reason="", readiness=false. Elapsed: 22.339408ms Jun 23 10:12:18.858: INFO: Pod "downwardapi-volume-672bcf31-5dce-4b26-9c44-08a6d5afe525": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065383083s Jun 23 10:12:20.913: INFO: Pod "downwardapi-volume-672bcf31-5dce-4b26-9c44-08a6d5afe525": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120224392s Jun 23 10:12:22.969: INFO: Pod "downwardapi-volume-672bcf31-5dce-4b26-9c44-08a6d5afe525": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.175553288s [1mSTEP[0m: Saw pod success Jun 23 10:12:22.969: INFO: Pod "downwardapi-volume-672bcf31-5dce-4b26-9c44-08a6d5afe525" satisfied condition "Succeeded or Failed" Jun 23 10:12:23.026: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod downwardapi-volume-672bcf31-5dce-4b26-9c44-08a6d5afe525 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:23.143: INFO: Waiting for pod downwardapi-volume-672bcf31-5dce-4b26-9c44-08a6d5afe525 to disappear Jun 23 10:12:23.179: INFO: Pod downwardapi-volume-672bcf31-5dce-4b26-9c44-08a6d5afe525 no longer exists [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
[32m• [SLOW TEST:6.666 seconds][0m [sig-storage] Projected downwardAPI [90mtest/e2e/common/storage/framework.go:23[0m should provide container's memory limit [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":112,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 20 lines ... [32m• [SLOW TEST:9.409 seconds][0m [sig-storage] Projected downwardAPI [90mtest/e2e/common/storage/framework.go:23[0m should update annotations on modification [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":432,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 47 lines ... [90mtest/e2e/kubectl/framework.go:23[0m Kubectl expose [90mtest/e2e/kubectl/kubectl.go:1249[0m should create services for rc [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":17,"skipped":346,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicationController test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 20 lines ... [32m• [SLOW TEST:11.549 seconds][0m [sig-apps] ReplicationController [90mtest/e2e/apps/framework.go:23[0m should serve a basic image on each replica with a public image [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":18,"skipped":383,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Networking test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 65 lines ... 
[90mtest/e2e/common/network/framework.go:23[0m Granular Checks: Pods [90mtest/e2e/common/network/networking.go:32[0m should function for intra-pod communication: http [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":297,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:12:22.841: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c35668a-c4fe-47e7-b980-3db3dcc98c80" in namespace "downward-api-2383" to be "Succeeded or Failed" Jun 23 10:12:22.892: INFO: Pod "downwardapi-volume-1c35668a-c4fe-47e7-b980-3db3dcc98c80": Phase="Pending", Reason="", readiness=false. Elapsed: 51.470385ms Jun 23 10:12:24.920: INFO: Pod "downwardapi-volume-1c35668a-c4fe-47e7-b980-3db3dcc98c80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07901281s Jun 23 10:12:26.962: INFO: Pod "downwardapi-volume-1c35668a-c4fe-47e7-b980-3db3dcc98c80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120692541s [1mSTEP[0m: Saw pod success Jun 23 10:12:26.962: INFO: Pod "downwardapi-volume-1c35668a-c4fe-47e7-b980-3db3dcc98c80" satisfied condition "Succeeded or Failed" Jun 23 10:12:27.006: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod downwardapi-volume-1c35668a-c4fe-47e7-b980-3db3dcc98c80 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:27.102: INFO: Waiting for pod downwardapi-volume-1c35668a-c4fe-47e7-b980-3db3dcc98c80 to disappear Jun 23 10:12:27.156: INFO: Pod downwardapi-volume-1c35668a-c4fe-47e7-b980-3db3dcc98c80 no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188 Jun 23 10:12:27.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "downward-api-2383" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":115,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Deployment test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 35 lines ... 
[32m• [SLOW TEST:12.899 seconds][0m [sig-apps] Deployment [90mtest/e2e/apps/framework.go:23[0m RollingUpdateDeployment should delete old pods and create new ones [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":15,"skipped":235,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] DisruptionController test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 463 lines ... [32m• [SLOW TEST:14.502 seconds][0m [sig-network] Service endpoints latency [90mtest/e2e/network/common/framework.go:23[0m should not be very high [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":18,"skipped":351,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":12,"skipped":406,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 20 lines ... [32m• [SLOW TEST:5.593 seconds][0m [sig-apps] ReplicaSet [90mtest/e2e/apps/framework.go:23[0m Replicaset should have a working scale subresource [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":11,"skipped":114,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected secret test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating projection with secret that has name projected-secret-test-c7b4f1f9-a134-4d98-b150-183dc7d2a89f [1mSTEP[0m: Creating a pod to test consume secrets Jun 23 10:12:22.643: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c1e02f59-0b24-477b-ba10-4793478b7613" in namespace "projected-3059" to be "Succeeded or Failed" Jun 23 10:12:22.691: INFO: Pod "pod-projected-secrets-c1e02f59-0b24-477b-ba10-4793478b7613": Phase="Pending", Reason="", readiness=false. 
Elapsed: 47.961797ms Jun 23 10:12:24.718: INFO: Pod "pod-projected-secrets-c1e02f59-0b24-477b-ba10-4793478b7613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074834773s Jun 23 10:12:26.747: INFO: Pod "pod-projected-secrets-c1e02f59-0b24-477b-ba10-4793478b7613": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104051638s Jun 23 10:12:28.772: INFO: Pod "pod-projected-secrets-c1e02f59-0b24-477b-ba10-4793478b7613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128820485s [1mSTEP[0m: Saw pod success Jun 23 10:12:28.772: INFO: Pod "pod-projected-secrets-c1e02f59-0b24-477b-ba10-4793478b7613" satisfied condition "Succeeded or Failed" Jun 23 10:12:28.803: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-projected-secrets-c1e02f59-0b24-477b-ba10-4793478b7613 container projected-secret-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:28.968: INFO: Waiting for pod pod-projected-secrets-c1e02f59-0b24-477b-ba10-4793478b7613 to disappear Jun 23 10:12:29.008: INFO: Pod pod-projected-secrets-c1e02f59-0b24-477b-ba10-4793478b7613 no longer exists [AfterEach] [sig-storage] Projected secret test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.932 seconds][0m [sig-storage] Projected secret [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":371,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 34 lines ... [90mtest/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90mtest/e2e/apps/statefulset.go:101[0m should have a working scale subresource [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":18,"skipped":354,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Container Lifecycle Hook test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 29 lines ... 
[90mtest/e2e/common/node/framework.go:23[0m when create a pod with lifecycle hook [90mtest/e2e/common/node/lifecycle_hook.go:46[0m should execute prestop http hook properly [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":106,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] Deployment test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 101 lines ... [32m• [SLOW TEST:14.387 seconds][0m [sig-apps] Deployment [90mtest/e2e/apps/framework.go:23[0m should run the lifecycle of a Deployment [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":11,"skipped":242,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 30 lines ... [32m• [SLOW TEST:8.947 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m should deny crd creation [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":14,"skipped":345,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] Servers with support for Table transformation test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 9 lines ... test/e2e/framework/framework.go:188 Jun 23 10:12:30.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "tables-3975" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":12,"skipped":263,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected combined test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... 
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name configmap-projected-all-test-volume-5a739f28-1d2b-48d1-92cf-3f3c1452ead5 [1mSTEP[0m: Creating secret with name secret-projected-all-test-volume-01606272-3600-4c05-8f13-e5816ad83345 [1mSTEP[0m: Creating a pod to test Check all projections for projected volume plugin Jun 23 10:12:27.795: INFO: Waiting up to 5m0s for pod "projected-volume-9eefe644-4a85-43df-8062-a277272538ed" in namespace "projected-7619" to be "Succeeded or Failed" Jun 23 10:12:27.844: INFO: Pod "projected-volume-9eefe644-4a85-43df-8062-a277272538ed": Phase="Pending", Reason="", readiness=false. Elapsed: 48.339151ms Jun 23 10:12:29.918: INFO: Pod "projected-volume-9eefe644-4a85-43df-8062-a277272538ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122772712s Jun 23 10:12:31.960: INFO: Pod "projected-volume-9eefe644-4a85-43df-8062-a277272538ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165073783s Jun 23 10:12:33.987: INFO: Pod "projected-volume-9eefe644-4a85-43df-8062-a277272538ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.192151188s [1mSTEP[0m: Saw pod success Jun 23 10:12:33.988: INFO: Pod "projected-volume-9eefe644-4a85-43df-8062-a277272538ed" satisfied condition "Succeeded or Failed" Jun 23 10:12:34.018: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod projected-volume-9eefe644-4a85-43df-8062-a277272538ed container projected-all-volume-test: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:34.097: INFO: Waiting for pod projected-volume-9eefe644-4a85-43df-8062-a277272538ed to disappear Jun 23 10:12:34.139: INFO: Pod projected-volume-9eefe644-4a85-43df-8062-a277272538ed no longer exists [AfterEach] [sig-storage] Projected combined test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:6.923 seconds][0m [sig-storage] Projected combined [90mtest/e2e/common/storage/framework.go:23[0m should project all components that make up the projection API [Projection][NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":327,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:12:27.433: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir 0777 on node default medium Jun 23 10:12:27.863: INFO: Waiting up to 5m0s for pod "pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a" in namespace "emptydir-3544" to be "Succeeded or Failed" Jun 23 10:12:27.899: INFO: Pod "pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.255931ms Jun 23 10:12:29.976: INFO: Pod "pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112295295s Jun 23 10:12:32.024: INFO: Pod "pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160264772s Jun 23 10:12:34.064: INFO: Pod "pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200687431s Jun 23 10:12:36.107: INFO: Pod "pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.243846405s [1mSTEP[0m: Saw pod success Jun 23 10:12:36.107: INFO: Pod "pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a" satisfied condition "Succeeded or Failed" Jun 23 10:12:36.159: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:36.266: INFO: Waiting for pod pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a to disappear Jun 23 10:12:36.301: INFO: Pod pod-b7a71f08-255d-4aaa-9ca1-31dd6002476a no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:8.953 seconds][0m [sig-storage] EmptyDir volumes [90mtest/e2e/common/storage/framework.go:23[0m should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":150,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43 [It] should provide container's memory request [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:12:29.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd" in namespace "projected-5742" to be "Succeeded or Failed" Jun 23 10:12:29.379: INFO: Pod "downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 61.190855ms Jun 23 10:12:31.450: INFO: Pod "downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133037229s Jun 23 10:12:33.475: INFO: Pod "downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157383095s Jun 23 10:12:35.529: INFO: Pod "downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211884146s Jun 23 10:12:37.559: INFO: Pod "downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.241585006s [1mSTEP[0m: Saw pod success Jun 23 10:12:37.559: INFO: Pod "downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd" satisfied condition "Succeeded or Failed" Jun 23 10:12:37.599: INFO: Trying to get logs from node nodes-us-west3-a-kn3q pod downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:37.689: INFO: Waiting for pod downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd to disappear Jun 23 10:12:37.715: INFO: Pod downwardapi-volume-f6f48625-2ea1-4add-b6ad-7c596558f1fd no longer exists [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:8.763 seconds][0m [sig-storage] Projected downwardAPI [90mtest/e2e/common/storage/framework.go:23[0m should provide container's memory request [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":154,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 17 lines ... Jun 23 10:12:07.270: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:07.295: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:07.321: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:07.347: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:07.374: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:07.399: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:07.399: INFO: Lookups using dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local 
jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local] Jun 23 10:12:12.427: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:12.456: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:12.482: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:12.509: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:12.534: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:12.559: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:12.585: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:12.610: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:12.610: INFO: Lookups using dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local] Jun 23 10:12:17.426: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:17.451: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod 
dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:17.476: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:17.502: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:17.527: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:17.555: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:17.580: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:17.606: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:17.606: INFO: Lookups using dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local] Jun 23 10:12:22.454: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:22.577: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:22.626: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:22.680: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 
10:12:22.750: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:22.826: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:22.871: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:22.914: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:22.914: INFO: Lookups using dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local] Jun 23 10:12:27.456: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:27.538: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:27.588: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:27.633: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:27.670: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:27.731: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:27.801: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod 
dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:27.844: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:27.844: INFO: Lookups using dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local] Jun 23 10:12:32.472: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:32.552: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:32.618: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:32.659: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:32.688: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:32.721: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:32.748: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:32.776: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local from pod dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b: the server could not find the requested resource (get pods dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b) Jun 23 10:12:32.776: INFO: Lookups using dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-9481.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9481.svc.cluster.local jessie_udp@dns-test-service-2.dns-9481.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9481.svc.cluster.local] Jun 23 10:12:37.704: INFO: DNS probes using dns-9481/dns-test-4dae6e4e-47d1-4231-bfdd-3dab6423e31b succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: deleting the test headless service [AfterEach] [sig-network] DNS ... skipping 5 lines ... [32m• [SLOW TEST:35.066 seconds][0m [sig-network] DNS [90mtest/e2e/network/common/framework.go:23[0m should provide DNS for pods for Subdomain [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":8,"skipped":91,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] server version test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 13 lines ... test/e2e/framework/framework.go:188 Jun 23 10:12:38.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "server-version-8028" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":13,"skipped":172,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 2 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating configMap with name configmap-test-volume-8fea05e1-27f3-4753-a64f-de5fb53e49f8 [1mSTEP[0m: Creating a pod to test consume configMaps Jun 23 10:12:29.884: INFO: Waiting up to 5m0s for pod "pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0" in namespace "configmap-4915" to be "Succeeded or Failed" Jun 23 10:12:29.944: INFO: Pod "pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0": Phase="Pending", Reason="", readiness=false. Elapsed: 59.213413ms Jun 23 10:12:31.969: INFO: Pod "pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085039424s Jun 23 10:12:33.996: INFO: Pod "pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111318797s Jun 23 10:12:36.048: INFO: Pod "pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163224892s Jun 23 10:12:38.077: INFO: Pod "pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.192332949s [1mSTEP[0m: Saw pod success Jun 23 10:12:38.077: INFO: Pod "pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0" satisfied condition "Succeeded or Failed" Jun 23 10:12:38.100: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:38.167: INFO: Waiting for pod pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0 to disappear Jun 23 10:12:38.195: INFO: Pod pod-configmaps-03661b9c-e2d8-4192-9dd0-bfa5d33499a0 no longer exists [AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:8.945 seconds][0m [sig-storage] ConfigMap [90mtest/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":422,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 23 lines ... [32m• [SLOW TEST:8.767 seconds][0m [sig-network] DNS [90mtest/e2e/network/common/framework.go:23[0m should provide DNS for the cluster [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":19,"skipped":366,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:12:30.314: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test emptydir volume type on node default medium Jun 23 10:12:30.538: INFO: Waiting up to 5m0s for pod "pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe" in namespace "emptydir-8250" to be "Succeeded or Failed" Jun 23 10:12:30.574: INFO: Pod "pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 36.055299ms Jun 23 10:12:32.619: INFO: Pod "pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08171474s Jun 23 10:12:34.645: INFO: Pod "pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe": Phase="Running", Reason="", readiness=false. Elapsed: 4.107398903s Jun 23 10:12:36.678: INFO: Pod "pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe": Phase="Running", Reason="", readiness=false. Elapsed: 6.140548718s Jun 23 10:12:38.706: INFO: Pod "pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.16772014s [1mSTEP[0m: Saw pod success Jun 23 10:12:38.706: INFO: Pod "pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe" satisfied condition "Succeeded or Failed" Jun 23 10:12:38.771: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe container test-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:38.922: INFO: Waiting for pod pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe to disappear Jun 23 10:12:38.946: INFO: Pod pod-45cdaa09-d2fe-417f-b1a2-c21b49f47bfe no longer exists [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188 ... skipping 58 lines ... [90mtest/e2e/kubectl/framework.go:23[0m Kubectl patch [90mtest/e2e/kubectl/kubectl.go:1486[0m should add annotations for pods in rc [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":13,"skipped":271,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 3 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:12:30.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed" in namespace "downward-api-2115" to be "Succeeded or Failed" Jun 23 10:12:30.980: INFO: Pod "downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 27.517645ms Jun 23 10:12:33.017: INFO: Pod "downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06491236s Jun 23 10:12:35.046: INFO: Pod "downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093192429s Jun 23 10:12:37.075: INFO: Pod "downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122942719s Jun 23 10:12:39.110: INFO: Pod "downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15777662s Jun 23 10:12:41.137: INFO: Pod "downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.184445488s [1mSTEP[0m: Saw pod success Jun 23 10:12:41.137: INFO: Pod "downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed" satisfied condition "Succeeded or Failed" Jun 23 10:12:41.163: INFO: Trying to get logs from node nodes-us-west3-a-djk0 pod downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:41.255: INFO: Waiting for pod downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed to disappear Jun 23 10:12:41.281: INFO: Pod downwardapi-volume-83956050-009f-4edb-b6ce-0794abb4b9ed no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188 ... skipping 4 lines ... [32m• [SLOW TEST:10.613 seconds][0m [sig-storage] Downward API volume [90mtest/e2e/common/storage/framework.go:23[0m should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":348,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client ... skipping 44 lines ... [1mSTEP[0m: Destroying namespace "services-4517" for this suite. [AfterEach] [sig-network] Services test/e2e/network/service.go:762 [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":16,"skipped":363,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m Jun 23 10:12:42.122: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:42.122: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:42.122: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 40 lines ... [32m• [SLOW TEST:13.441 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m works for CRD preserving unknown fields in an embedded object [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":19,"skipped":379,"failed":0} Jun 23 10:12:42.136: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:42.136: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:42.136: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:42.136: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:42.136: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:42.136: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 22 lines ... 
[32m• [SLOW TEST:7.342 seconds][0m [sig-api-machinery] ResourceQuota [90mtest/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":13,"skipped":155,"failed":0} Jun 23 10:12:43.768: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:43.768: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:43.768: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:43.768: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:43.768: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:43.768: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 10 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43 [It] should provide container's cpu request [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating a pod to test downward API volume plugin Jun 23 10:12:38.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-792947f5-9dfe-4df0-8463-77f8a597dd41" in namespace "downward-api-3962" to be "Succeeded or Failed" Jun 23 10:12:38.269: INFO: Pod "downwardapi-volume-792947f5-9dfe-4df0-8463-77f8a597dd41": Phase="Pending", Reason="", readiness=false. Elapsed: 24.994387ms Jun 23 10:12:40.298: INFO: Pod "downwardapi-volume-792947f5-9dfe-4df0-8463-77f8a597dd41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053966161s Jun 23 10:12:42.350: INFO: Pod "downwardapi-volume-792947f5-9dfe-4df0-8463-77f8a597dd41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106016125s Jun 23 10:12:44.384: INFO: Pod "downwardapi-volume-792947f5-9dfe-4df0-8463-77f8a597dd41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139531947s [1mSTEP[0m: Saw pod success Jun 23 10:12:44.384: INFO: Pod "downwardapi-volume-792947f5-9dfe-4df0-8463-77f8a597dd41" satisfied condition "Succeeded or Failed" Jun 23 10:12:44.419: INFO: Trying to get logs from node nodes-us-west3-a-x977 pod downwardapi-volume-792947f5-9dfe-4df0-8463-77f8a597dd41 container client-container: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:44.503: INFO: Waiting for pod downwardapi-volume-792947f5-9dfe-4df0-8463-77f8a597dd41 to disappear Jun 23 10:12:44.533: INFO: Pod downwardapi-volume-792947f5-9dfe-4df0-8463-77f8a597dd41 no longer exists [AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188 ... skipping 4 lines ... 
[32m• [SLOW TEST:6.554 seconds][0m [sig-storage] Downward API volume [90mtest/e2e/common/storage/framework.go:23[0m should provide container's cpu request [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":121,"failed":0} Jun 23 10:12:44.595: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:44.595: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:44.595: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:44.595: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:44.595: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:44.595: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 27 lines ... [90mtest/e2e/common/node/framework.go:23[0m when scheduling a busybox Pod with hostAliases [90mtest/e2e/common/node/kubelet.go:139[0m should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":179,"failed":0} Jun 23 10:12:44.640: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:44.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:44.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:44.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:44.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:44.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 30 lines ... test/e2e/framework/framework.go:188 Jun 23 10:12:45.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "dns-7701" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":14,"skipped":352,"failed":0} Jun 23 10:12:45.156: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:45.156: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:45.156: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:45.156: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:45.156: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:45.156: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 29 lines ... 
[32m• [SLOW TEST:7.915 seconds][0m [sig-node] Pods [90mtest/e2e/common/node/framework.go:23[0m should be submitted and removed [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":429,"failed":0} Jun 23 10:12:46.219: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:46.219: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:46.219: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:46.219: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:46.219: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:46.219: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 83 lines ... [32m• [SLOW TEST:12.628 seconds][0m [sig-node] KubeletManagedEtcHosts [90mtest/e2e/common/node/framework.go:23[0m should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":328,"failed":0} Jun 23 10:12:46.854: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:46.854: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:46.854: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:46.854: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:46.854: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:46.854: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 85 lines ... 
[32m• [SLOW TEST:19.775 seconds][0m [sig-network] Services [90mtest/e2e/network/common/framework.go:23[0m should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":413,"failed":0} Jun 23 10:12:48.377: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:48.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:48.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:48.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:48.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:48.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Jun 23 10:12:48.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Jun 23 10:12:48.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":109,"failed":0} [BeforeEach] version v1 test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:12:39.038: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename proxy [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 347 lines ... [90mtest/e2e/network/common/framework.go:23[0m version v1 [90mtest/e2e/network/proxy.go:74[0m should proxy through a service and a pod [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":7,"skipped":109,"failed":0} Jun 23 10:12:50.592: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:50.592: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:50.592: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:50.592: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:50.592: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:50.592: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 22 lines ... 
[32m• [SLOW TEST:26.654 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90mtest/e2e/apimachinery/framework.go:23[0m works for multiple CRDs of different groups [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":16,"skipped":451,"failed":0} Jun 23 10:12:50.964: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:50.964: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:50.964: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:50.964: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:50.964: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:50.964: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 26 lines ... [32m• [SLOW TEST:37.781 seconds][0m [sig-apps] Job [90mtest/e2e/apps/framework.go:23[0m should delete a job [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":14,"skipped":221,"failed":0} Jun 23 10:12:52.602: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:52.602: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:52.602: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:52.602: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:52.602: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:52.602: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 12 lines ... test/e2e/storage/subpath.go:40 [1mSTEP[0m: Setting up data [It] should support subpaths with projected pod [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Creating pod pod-subpath-test-projected-8f7p [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jun 23 10:12:26.247: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8f7p" in namespace "subpath-4551" to be "Succeeded or Failed" Jun 23 10:12:26.324: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Pending", Reason="", readiness=false. Elapsed: 76.615219ms Jun 23 10:12:28.347: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10034046s Jun 23 10:12:30.377: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130398997s Jun 23 10:12:32.488: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Running", Reason="", readiness=true. Elapsed: 6.241447377s Jun 23 10:12:34.514: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Running", Reason="", readiness=true. Elapsed: 8.266841949s Jun 23 10:12:36.545: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Running", Reason="", readiness=true. Elapsed: 10.298113135s ... skipping 3 lines ... Jun 23 10:12:44.666: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Running", Reason="", readiness=true. Elapsed: 18.419487236s Jun 23 10:12:46.693: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.446337414s Jun 23 10:12:48.718: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Running", Reason="", readiness=true. Elapsed: 22.471063981s Jun 23 10:12:50.741: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Running", Reason="", readiness=false. Elapsed: 24.493796807s Jun 23 10:12:52.765: INFO: Pod "pod-subpath-test-projected-8f7p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.518567368s [1mSTEP[0m: Saw pod success Jun 23 10:12:52.766: INFO: Pod "pod-subpath-test-projected-8f7p" satisfied condition "Succeeded or Failed" Jun 23 10:12:52.788: INFO: Trying to get logs from node nodes-us-west3-a-j6c5 pod pod-subpath-test-projected-8f7p container test-container-subpath-projected-8f7p: <nil> [1mSTEP[0m: delete the pod Jun 23 10:12:52.841: INFO: Waiting for pod pod-subpath-test-projected-8f7p to disappear Jun 23 10:12:52.884: INFO: Pod pod-subpath-test-projected-8f7p no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-projected-8f7p Jun 23 10:12:52.884: INFO: Deleting pod "pod-subpath-test-projected-8f7p" in namespace "subpath-4551" ... skipping 8 lines ... [90mtest/e2e/storage/utils/framework.go:23[0m Atomic writer volumes [90mtest/e2e/storage/subpath.go:36[0m should support subpaths with projected pod [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]","total":-1,"completed":19,"skipped":393,"failed":0} Jun 23 10:12:52.971: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:52.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:52.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:52.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:52.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:52.971: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 60 lines ... [32m• [SLOW TEST:16.062 seconds][0m [sig-network] Services [90mtest/e2e/network/common/framework.go:23[0m should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":376,"failed":0} Jun 23 10:12:55.053: INFO: Running AfterSuite actions on all nodes Jun 23 10:12:55.053: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:12:55.053: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:12:55.053: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:12:55.053: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:12:55.053: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 25 lines ... 
[32m• [SLOW TEST:88.398 seconds][0m [sig-apps] CronJob [90mtest/e2e/apps/framework.go:23[0m should replace jobs when ReplaceConcurrent [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":11,"skipped":98,"failed":0} Jun 23 10:13:01.618: INFO: Running AfterSuite actions on all nodes Jun 23 10:13:01.618: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:13:01.618: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:13:01.618: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:13:01.618: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:13:01.618: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 7 lines ... Jun 23 10:12:17.567: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename init-container [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/common/node/init_container.go:164 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: creating the pod Jun 23 10:12:17.732: INFO: PodSpec: initContainers in spec.initContainers Jun 23 10:13:02.102: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b4feee46-9137-4cd8-ac9d-e3c099e37338", GenerateName:"", Namespace:"init-container-6525", SelfLink:"", UID:"d91f5885-5de9-47b8-9f08-62ab4a1b191d", ResourceVersion:"16623", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 10, 12, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"732801689"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 10, 12, 17, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003932120), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 10, 12, 25, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003932150), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-mnpx7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc003910100), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-mnpx7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-mnpx7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.7", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-mnpx7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00377e338), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"nodes-us-west3-a-djk0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003220150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00377e3b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00377e3d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00377e3d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00377e3dc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003938070), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.June, 23, 10, 12, 17, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.June, 23, 10, 12, 17, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.June, 23, 10, 12, 17, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.June, 23, 10, 12, 17, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.0.16.3", PodIP:"100.96.3.94", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.3.94"}}, StartTime:time.Date(2022, time.June, 23, 10, 12, 17, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0032202a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003220310)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", 
ContainerID:"containerd://890f049ea7ae6ff4e9d1ee1908f67ff3df465320db4b3caca5732c21b4ffb9a0", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003910180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003910160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.7", ImageID:"", ContainerID:"", Started:(*bool)(0xc00377e45f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/framework.go:188 Jun 23 10:13:02.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "init-container-6525" for this suite. [32m• [SLOW TEST:44.588 seconds][0m [sig-node] InitContainer [NodeConformance] [90mtest/e2e/common/node/framework.go:23[0m should not start app containers if init containers fail on a RestartAlways pod [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":16,"skipped":463,"failed":0} Jun 23 10:13:02.163: INFO: Running AfterSuite actions on all nodes Jun 23 10:13:02.163: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:13:02.163: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:13:02.163: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:13:02.163: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:13:02.163: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 30 lines ... 
[32m• [SLOW TEST:80.227 seconds][0m [sig-storage] Projected configMap [90mtest/e2e/common/storage/framework.go:23[0m optional updates should be reflected in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":345,"failed":0} Jun 23 10:13:03.641: INFO: Running AfterSuite actions on all nodes Jun 23 10:13:03.641: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:13:03.641: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:13:03.641: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:13:03.641: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:13:03.641: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 89 lines ... [90mtest/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90mtest/e2e/apps/statefulset.go:101[0m should perform rolling updates and roll backs of template modifications [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":4,"skipped":139,"failed":0} Jun 23 10:13:05.366: INFO: Running AfterSuite actions on all nodes Jun 23 10:13:05.366: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:13:05.366: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:13:05.366: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:13:05.366: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:13:05.366: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 32 lines ... [32m• [SLOW TEST:80.506 seconds][0m [sig-storage] Secrets [90mtest/e2e/common/storage/framework.go:23[0m optional updates should be reflected in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":226,"failed":0} Jun 23 10:13:10.832: INFO: Running AfterSuite actions on all nodes Jun 23 10:13:10.832: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:13:10.832: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:13:10.832: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:13:10.832: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:13:10.832: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 20 lines ... 
Jun 23 10:12:30.266: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 23 10:12:32.298: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jun 23 10:12:32.364: INFO: Running '/logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=services-5252 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 23 10:12:32.789: INFO: rc: 7 Jun 23 10:12:32.836: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 23 10:12:32.865: INFO: Pod kube-proxy-mode-detector no longer exists Jun 23 10:12:32.865: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /logs/artifacts/05476543-f2da-11ec-9934-ba3111e5ac70/kubectl --server=https://34.106.25.134 --kubeconfig=/root/.kube/config --namespace=services-5252 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode: Command stdout: stderr: + curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode command terminated with exit code 7 error: exit status 7 [1mSTEP[0m: creating service affinity-clusterip-timeout in namespace services-5252 [1mSTEP[0m: creating replication controller affinity-clusterip-timeout in namespace services-5252 I0623 10:12:32.929528 39590 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5252, replica count: 3 I0623 10:12:35.981426 39590 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0623 10:12:38.981935 39590 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady ... skipping 45 lines ... [32m• [SLOW TEST:51.732 seconds][0m [sig-network] Services [90mtest/e2e/network/common/framework.go:23[0m should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":443,"failed":0} Jun 23 10:13:13.302: INFO: Running AfterSuite actions on all nodes Jun 23 10:13:13.303: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:13:13.303: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:13:13.303: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:13:13.303: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:13:13.303: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 31 lines ... 
[32m• [SLOW TEST:144.628 seconds][0m [sig-node] Probing container [90mtest/e2e/common/node/framework.go:23[0m should have monotonically increasing restart count [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0} Jun 23 10:13:37.178: INFO: Running AfterSuite actions on all nodes Jun 23 10:13:37.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:13:37.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:13:37.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:13:37.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:13:37.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Jun 23 10:13:37.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Jun 23 10:13:37.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0} [BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jun 23 10:09:44.839: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename container-probe [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 16 lines ... [32m• [SLOW TEST:246.036 seconds][0m [sig-node] Probing container [90mtest/e2e/common/node/framework.go:23[0m should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0} Jun 23 10:13:50.882: INFO: Running AfterSuite actions on all nodes Jun 23 10:13:50.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:13:50.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:13:50.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:13:50.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:13:50.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 28 lines ... 
[32m• [SLOW TEST:87.711 seconds][0m [sig-storage] Projected configMap [90mtest/e2e/common/storage/framework.go:23[0m updates should be reflected in volume [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":251,"failed":0} Jun 23 10:13:55.922: INFO: Running AfterSuite actions on all nodes Jun 23 10:13:55.922: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:13:55.922: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:13:55.922: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:13:55.922: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:13:55.922: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 ... skipping 54 lines ... [32m• [SLOW TEST:247.759 seconds][0m [sig-node] Probing container [90mtest/e2e/common/node/framework.go:23[0m should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] [90mtest/e2e/framework/framework.go:652[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":250,"failed":0} Jun 23 10:16:16.589: INFO: Running AfterSuite actions on all nodes Jun 23 10:16:16.589: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:16:16.589: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:16:16.589: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:16:16.589: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:16:16.589: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Jun 23 10:16:16.589: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Jun 23 10:16:16.589: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":101,"failed":0} Jun 23 10:15:41.446: INFO: Running AfterSuite actions on all nodes Jun 23 10:15:41.446: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jun 23 10:15:41.446: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Jun 23 10:15:41.446: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Jun 23 10:15:41.446: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Jun 23 10:15:41.446: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Jun 23 10:15:41.446: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Jun 23 10:15:41.446: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 Jun 23 10:16:16.614: INFO: Running AfterSuite actions on node 1 Jun 23 10:16:16.614: INFO: Skipping dumping logs from cluster [1m[32mRan 326 of 6971 Specs in 454.841 seconds[0m [1m[32mSUCCESS![0m -- [32m[1m326 Passed[0m | [91m[1m0 
Failed[0m | [33m[1m0 Pending[0m | [36m[1m6645 Skipped[0m Ginkgo ran 1 suite in 7m49.809352126s Test Suite Passed I0623 10:16:16.646788 5932 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops toolbox dump --name e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local --dir /logs/artifacts --private-key /tmp/kops-ssh970144996/key --ssh-user prow I0623 10:16:35.302431 5932 dumplogs.go:78] /home/prow/go/src/k8s.io/kops/.build/dist/linux/amd64/kops get cluster --name e2e-pr13859.pull-kops-e2e-k8s-gce.k8s.local -o yaml ... skipping 198 lines ... Route:e2e-pr13859-pull-kops-e2e--2722aa6e-0a95-49d4-8bdf-db2a753cc690 ok Route:e2e-pr13859-pull-kops-e2e--ceb5d31b-aae6-4c9a-8e30-20c81e44c907 ok Route:e2e-pr13859-pull-kops-e2e--ee9379c1-6ad2-46c1-bc3c-8aa297c11ee7 ok Route:e2e-pr13859-pull-kops-e2e--624964d6-4acb-4eff-bfb4-7d787894dd86 ok HTTP HealthCheck:api-e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local ok Subnet:us-west3-e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local ok E0623 10:18:27.932783 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:18:44.736189 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:19:01.733849 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:19:18.475011 42250 op.go:136] GCE operation failed: googleapi: 
Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:19:35.240978 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:19:52.084245 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:20:08.822326 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:20:25.896082 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 
'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:20:42.758881 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:20:59.529182 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:21:16.467110 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:21:33.383186 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: 
Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:21:50.215183 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:22:07.273881 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:22:24.074530 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:22:40.809935 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 
'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:22:57.633635 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:23:14.762048 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:23:31.608116 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local E0623 10:23:48.630855 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m' Not all resources deleted; waiting before reattempting deletion 
... skipping repeated deletion retries (E0623 10:24:05 through E0623 10:29:41, every ~17s, all failing with the same googleapi Error 400: network still in use by firewall nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m) ...
	Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local
E0623 10:29:58.673199 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m'
Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local	error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m'
Not all resources deleted; waiting before reattempting deletion
	Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local
E0623 10:30:15.453575 42250 op.go:136] GCE operation failed: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m'
Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local	error deleting resources, will retry: googleapi: Error 400: The network resource 'projects/k8s-boskos-gce-project-15/global/networks/e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local' is already being used by 'projects/k8s-boskos-gce-project-15/global/firewalls/nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m'
Not all resources deleted; waiting before reattempting deletion
	Network:e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local
Error: not making progress deleting resources; giving up
Error: exit status 1
+ EXIT_VALUE=1
+ set +o xtrace
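Note on the failure above: a GCE VPC network cannot be deleted while any firewall rule still references it. Here the e2e run left behind the firewall nodeport-external-to-node-ipv6-e2e-pr13859-pull-kops-e2e-covv5m, which the kops deleter was not tracking, so every delete attempt hit the same googleapi Error 400 until the job gave up and exited 1. The Go sketch below is not part of the job output; it is a minimal illustration, assuming only the public google.golang.org/api/compute/v1 client and Application Default Credentials, of how a cleanup pass could remove firewall rules that still point at a network before deleting the network itself. The function name and flow are hypothetical, not kops code.

package main

// Hypothetical cleanup sketch (not kops code): delete firewall rules that still
// reference a VPC network, then delete the network.
import (
	"context"
	"fmt"
	"log"
	"strings"

	compute "google.golang.org/api/compute/v1"
)

func deleteNetworkWithDependents(ctx context.Context, project, network string) error {
	svc, err := compute.NewService(ctx)
	if err != nil {
		return err
	}
	// List firewall rules and pick out those whose Network URL ends in the
	// target network name. (A real tool would also page through results and
	// wait for each returned Operation to finish.)
	fws, err := svc.Firewalls.List(project).Context(ctx).Do()
	if err != nil {
		return err
	}
	suffix := "/global/networks/" + network
	for _, fw := range fws.Items {
		if strings.HasSuffix(fw.Network, suffix) {
			fmt.Printf("deleting leaked firewall %s\n", fw.Name)
			if _, err := svc.Firewalls.Delete(project, fw.Name).Context(ctx).Do(); err != nil {
				return err
			}
		}
	}
	// This call keeps failing with "resource is already being used by" (HTTP 400)
	// until the firewall deletions above have actually completed.
	_, err = svc.Networks.Delete(project, network).Context(ctx).Do()
	return err
}

func main() {
	ctx := context.Background()
	if err := deleteNetworkWithDependents(ctx, "k8s-boskos-gce-project-15", "e2e-pr13859-pull-kops-e2e-k8s-gce-k8s-local"); err != nil {
		log.Fatal(err)
	}
}

The ordering is the whole point: firewalls first, network last. The retry loop in the log could never make progress because nothing was removing the leaked firewall; the same check can be done by hand with gcloud compute firewall-rules list, filtered on the cluster network, before re-running kops delete cluster.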