Result   | FAILURE
Tests    | 1 failed / 2 succeeded
Started  |
Elapsed  | 1h13m
Revision | master
exit status 255
from junit_runner.xml
kubetest2 Down
kubetest2 Up
... skipping 136 lines ...
I0112 17:14:40.353772    5522 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.27.0-alpha.2+v1.27.0-alpha.1-180-ga1a0ce3f33/linux/amd64/kops
I0112 17:14:41.708420    5522 local.go:42] ⚙️ ssh-keygen -t ed25519 -N -q -f /tmp/kops/e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io/id_ed25519
I0112 17:14:41.740037    5522 up.go:44] Cleaning up any leaked resources from previous cluster
I0112 17:14:41.740152    5522 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops toolbox dump --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user
I0112 17:14:41.740169    5522 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops toolbox dump --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user
I0112 17:14:41.777037    5542 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
Error: Cluster.kops.k8s.io "e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io" not found
W0112 17:14:42.260274    5522 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0112 17:14:42.260331    5522 down.go:48] /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops delete cluster --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --yes
I0112 17:14:42.260345    5522 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops delete cluster --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --yes
I0112 17:14:42.296940    5552 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io" not found
I0112 17:14:42.801482    5522 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2023/01/12 17:14:42 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0112 17:14:42.815032    5522 http.go:37] curl https://ip.jsb.workers.dev
I0112 17:14:42.922715    5522 up.go:167] /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops create cluster --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.26.0 --ssh-public-key /tmp/kops/e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io/id_ed25519.pub --set cluster.spec.nodePortAccess=0.0.0.0/0 --image=137112412989/amzn2-ami-kernel-5.10-hvm-2.0.20221210.1-x86_64-gp2 --channel=alpha --networking=cilium-eni --container-runtime=containerd --admin-access 34.72.201.195/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-1a --master-size c5.large
I0112 17:14:42.922758    5522 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops create cluster --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.26.0 --ssh-public-key /tmp/kops/e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io/id_ed25519.pub --set cluster.spec.nodePortAccess=0.0.0.0/0 --image=137112412989/amzn2-ami-kernel-5.10-hvm-2.0.20221210.1-x86_64-gp2 --channel=alpha --networking=cilium-eni --container-runtime=containerd --admin-access 34.72.201.195/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-1a --master-size c5.large
I0112 17:14:42.962736    5560 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0112 17:14:42.980310    5560 create_cluster.go:884] Using SSH public key: /tmp/kops/e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io/id_ed25519.pub
I0112 17:14:43.466435    5560 new_cluster.go:1338] Cloud Provider ID: "aws"
... skipping 516 lines ...
NODE STATUS
NAME  ROLE  READY

VALIDATION ERRORS
KIND  NAME       MESSAGE
dns   apiserver  Validation Failed  The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a control plane node to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0112 17:15:24.866790    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a

NODE STATUS
NAME  ROLE  READY

VALIDATION ERRORS
KIND  NAME       MESSAGE
dns   apiserver  Validation Failed  The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a control plane node to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0112 17:15:34.907840    5602 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping repeated validation attempts (W0112 17:15:44.960552 through W0112 17:19:05.824557, each failing with the same dns apiserver "Validation Failed" placeholder-DNS error) ...
Validation Failed
W0112 17:19:15.860357    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a
... skipping 10 lines ...
Pod  kube-system/cilium-tgktm                             system-node-critical     pod "cilium-tgktm" is not ready (cilium-agent)
Pod  kube-system/coredns-559769c974-9bds7                 system-cluster-critical  pod "coredns-559769c974-9bds7" is pending
Pod  kube-system/coredns-autoscaler-7cb5c5b969-gpjg5      system-cluster-critical  pod "coredns-autoscaler-7cb5c5b969-gpjg5" is pending
Pod  kube-system/ebs-csi-node-csjsv                       system-node-critical     pod "ebs-csi-node-csjsv" is pending
Pod  kube-system/etcd-manager-events-i-064d67fb1979934c5  system-cluster-critical  pod "etcd-manager-events-i-064d67fb1979934c5" is pending

Validation Failed
W0112 17:19:27.541635    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a
... skipping 9 lines ...
Machine  i-06e12471aa18677f8                              machine "i-06e12471aa18677f8" has not yet joined cluster
Pod      kube-system/cilium-tgktm                         system-node-critical     pod "cilium-tgktm" is not ready (cilium-agent)
Pod      kube-system/coredns-559769c974-9bds7             system-cluster-critical  pod "coredns-559769c974-9bds7" is pending
Pod      kube-system/coredns-autoscaler-7cb5c5b969-gpjg5  system-cluster-critical  pod "coredns-autoscaler-7cb5c5b969-gpjg5" is pending
Pod      kube-system/ebs-csi-node-csjsv                   system-node-critical     pod "ebs-csi-node-csjsv" is pending

Validation Failed
W0112 17:19:49.930078    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a
... skipping 20 lines ...
Pod  kube-system/ebs-csi-node-cq42j  system-node-critical  pod "ebs-csi-node-cq42j" is pending
Pod  kube-system/ebs-csi-node-csjsv  system-node-critical  pod "ebs-csi-node-csjsv" is pending
Pod  kube-system/ebs-csi-node-j97z7  system-node-critical  pod "ebs-csi-node-j97z7" is pending
Pod  kube-system/ebs-csi-node-q999d  system-node-critical  pod "ebs-csi-node-q999d" is pending
Pod  kube-system/ebs-csi-node-zl947  system-node-critical  pod "ebs-csi-node-zl947" is pending

Validation Failed
W0112 17:20:00.943833    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a
... skipping 16 lines ...
Pod  kube-system/coredns-autoscaler-7cb5c5b969-gpjg5  system-cluster-critical  pod "coredns-autoscaler-7cb5c5b969-gpjg5" is pending
Pod  kube-system/ebs-csi-node-cq42j                   system-node-critical     pod "ebs-csi-node-cq42j" is pending
Pod  kube-system/ebs-csi-node-j97z7                   system-node-critical     pod "ebs-csi-node-j97z7" is pending
Pod  kube-system/ebs-csi-node-q999d                   system-node-critical     pod "ebs-csi-node-q999d" is pending
Pod  kube-system/ebs-csi-node-zl947                   system-node-critical     pod "ebs-csi-node-zl947" is pending

Validation Failed
W0112 17:20:12.098201    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a
... skipping 13 lines ...
Pod  kube-system/coredns-autoscaler-7cb5c5b969-gpjg5  system-cluster-critical  pod "coredns-autoscaler-7cb5c5b969-gpjg5" is pending
Pod  kube-system/ebs-csi-node-cq42j                   system-node-critical     pod "ebs-csi-node-cq42j" is pending
Pod  kube-system/ebs-csi-node-j97z7                   system-node-critical     pod "ebs-csi-node-j97z7" is pending
Pod  kube-system/ebs-csi-node-q999d                   system-node-critical     pod "ebs-csi-node-q999d" is pending
Pod  kube-system/ebs-csi-node-zl947                   system-node-critical     pod "ebs-csi-node-zl947" is pending

Validation Failed
W0112 17:20:23.245095    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a
... skipping 10 lines ...
Pod  kube-system/cilium-42g6f              system-node-critical     pod "cilium-42g6f" is not ready (cilium-agent)
Pod  kube-system/coredns-559769c974-n7qxc  system-cluster-critical  pod "coredns-559769c974-n7qxc" is pending
Pod  kube-system/ebs-csi-node-j97z7        system-node-critical     pod "ebs-csi-node-j97z7" is pending
Pod  kube-system/ebs-csi-node-q999d        system-node-critical     pod "ebs-csi-node-q999d" is pending
Pod  kube-system/ebs-csi-node-zl947        system-node-critical     pod "ebs-csi-node-zl947" is pending

Validation Failed
W0112 17:20:34.515139    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a
... skipping 7 lines ...

VALIDATION ERRORS
KIND  NAME                            MESSAGE
Pod   kube-system/cilium-42g6f        system-node-critical  pod "cilium-42g6f" is not ready (cilium-agent)
Pod   kube-system/ebs-csi-node-j97z7  system-node-critical  pod "ebs-csi-node-j97z7" is pending

Validation Failed
W0112 17:20:45.699289    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a
... skipping 8 lines ...

VALIDATION ERRORS
KIND  NAME                            MESSAGE
Node  i-01daa1f0ea8dcef5d             node "i-01daa1f0ea8dcef5d" of role "node" is not ready
Pod   kube-system/cilium-42g6f        system-node-critical  pod "cilium-42g6f" is not ready (cilium-agent)
Pod   kube-system/ebs-csi-node-j97z7  system-node-critical  pod "ebs-csi-node-j97z7" is pending

Validation Failed
W0112 17:20:56.665571    5602 validate_cluster.go:232] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                      ROLE          MACHINETYPE  MIN  MAX  SUBNETS
control-plane-us-east-1a  ControlPlane  c5.large     1    1    us-east-1a
nodes-us-east-1a          Node          t3.medium    4    4    us-east-1a
... skipping 7 lines ...
VALIDATION ERRORS
KIND NAME MESSAGE
Node i-01daa1f0ea8dcef5d node "i-01daa1f0ea8dcef5d" of role "node" is not ready
Pod kube-system/ebs-csi-node-j97z7 system-node-critical pod "ebs-csi-node-j97z7" is pending
Validation Failed
W0112 17:21:07.773638 5602 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
control-plane-us-east-1a ControlPlane c5.large 1 1 us-east-1a
nodes-us-east-1a Node t3.medium 4 4 us-east-1a
... skipping 6 lines ...
i-06e12471aa18677f8 node True
VALIDATION ERRORS
KIND NAME MESSAGE
Pod kube-system/ebs-csi-node-j97z7 system-node-critical pod "ebs-csi-node-j97z7" is pending
Validation Failed
W0112 17:21:18.995600 5602 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
control-plane-us-east-1a ControlPlane c5.large 1 1 us-east-1a
nodes-us-east-1a Node t3.medium 4 4 us-east-1a
... skipping 720 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link-bindmounted]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:23:39.033: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 171 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:23:39.160: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 73 lines ...
------------------------------
S [SKIPPED] [0.400 seconds]
External Storage [Driver: ebs.csi.aws.com]
test/e2e/storage/external/external.go:173
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 2 lines ...
Jan 12 17:23:38.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology 01/12/23 17:23:38.911
STEP: Waiting for a default service account to be provisioned in namespace 01/12/23 17:23:39.002
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/12/23 17:23:39.063
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/framework/metrics/init/init.go:31
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Jan 12 17:23:39.186: INFO: found topology map[topology.ebs.csi.aws.com/zone:us-east-1a]
Jan 12 17:23:39.187: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc 01/12/23 17:23:39.187
STEP: Deleting sc 01/12/23 17:23:39.187
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 348 lines ...
<< End Captured GinkgoWriter Output
Driver "ebs.csi.aws.com" does not support volume type "InlineVolume" - skipping
In [BeforeEach] at: test/e2e/storage/external/external.go:268
------------------------------
• [SLOW TEST] [7.882 seconds]
[sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
test/e2e/storage/persistent_volumes-local.go:382
------------------------------
• [SLOW TEST] [8.540 seconds]
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/common/storage/secrets_volume.go:47
------------------------------
... skipping 537 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-file]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:24:10.188: INFO: Only supported for providers [azure] (not aws)
... skipping 601 lines ...
Driver local doesn't support DynamicPV -- skipping
In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
S•SSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [14.371 seconds]
[sig-apps] Job should fail to exceed backoffLimit
test/e2e/apps/job.go:561
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 844 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-file]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:25:01.414: INFO: Only supported for providers [azure] (not aws)
... skipping 684 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-disk]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:25:17.182: INFO: Only supported for providers [azure] (not aws)
... skipping 1367 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:26:05.624: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 381 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: emptydir]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:26:18.724: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 10 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:26:18.728: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 1088 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:27:02.808: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 218 lines ...
[sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
test/e2e/storage/csi_mock_volume.go:392
------------------------------
SSSS
------------------------------
• [SLOW TEST] [94.539 seconds]
[sig-apps] CronJob should delete failed finished jobs with limit of one job
test/e2e/apps/cronjob.go:291
------------------------------
• [SLOW TEST] [6.553 seconds]
[sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
test/e2e/node/security_context.go:129
------------------------------
... skipping 1901 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: tmpfs]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:28:24.102: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 409 lines ...
Driver local doesn't support GenericEphemeralVolume -- skipping
In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
• [SLOW TEST] [300.523 seconds]
[sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
test/e2e/storage/volume_provisioning.go:705
------------------------------
S•
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 61 lines ...
Driver hostPath doesn't support DynamicPV -- skipping
In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
• [FAILED] [332.873 seconds]
[sig-network] Networking
test/e2e/common/network/framework.go:23
Granular Checks: Pods
test/e2e/common/network/networking.go:32
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
test/e2e/common/network/networking.go:93
... skipping 350 lines ...
Jan 12 17:29:07.951: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.075683984s
Jan 12 17:29:07.951: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Jan 12 17:29:09.951: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.075612441s
Jan 12 17:29:09.951: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Jan 12 17:29:09.987: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 5m0.111073989s
Jan 12 17:29:09.987: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Jan 12 17:29:09.988: INFO: Unexpected error:
<*pod.timeoutError | 0xc003f19f20>: {
msg: "timed out while waiting for pod pod-network-test-8832/netserver-1 to be running and ready",
observedObjects: [
<*v1.Pod | 0xc0001db680>{
TypeMeta: {Kind: "", APIVersion: ""},
ObjectMeta: {
... skipping 127 lines ...
Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
Jan 12 17:29:09.988: FAIL: timed out while waiting for pod pod-network-test-8832/netserver-1 to be running and ready
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0003180e0, {0x75c6f5c, 0x9}, 0xc003fe8390)
test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0003180e0, 0x47?)
test/e2e/framework/network/utils.go:763 +0x55
... skipping 12 lines ...
STEP: Collecting events from namespace "pod-network-test-8832". 01/12/23 17:29:10.021
STEP: Found 36 events.
01/12/23 17:29:10.052
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:39 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned pod-network-test-8832/netserver-0 to i-01daa1f0ea8dcef5d
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:39 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned pod-network-test-8832/netserver-1 to i-03f9dde5751a3fd38
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:39 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned pod-network-test-8832/netserver-2 to i-06a506de3e6c2b98a
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:39 +0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned pod-network-test-8832/netserver-3 to i-06e12471aa18677f8
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:40 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-vw7fw" : failed to sync configmap cache: timed out waiting for the condition
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:40 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulling: Pulling image "registry.k8s.io/e2e-test-images/agnhost:2.43"
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:40 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Pulling: Pulling image "registry.k8s.io/e2e-test-images/agnhost:2.43"
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:41 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Pulling: Pulling image "registry.k8s.io/e2e-test-images/agnhost:2.43"
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:41 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Pulling: Pulling image "registry.k8s.io/e2e-test-images/agnhost:2.43"
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:42 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Started: Started container webserver
Jan 12 17:29:10.052: INFO: At
2023-01-12 17:23:42 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Created: Created container webserver
... skipping 4 lines ...
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:45 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:45 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:45 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/agnhost:2.43" in 143.229739ms (4.815312634s including waiting)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:46 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Started: Started container webserver
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:46 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Created: Created container webserver
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:23:46 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/agnhost:2.43" in 115.234568ms (5.017947833s including waiting)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:24:30 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.42.219:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:24:30 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.42.219:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:24:30 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Liveness probe failed: Get "http://172.20.41.237:8083/healthz": context deadline exceeded
(Client.Timeout exceeded while awaiting headers)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:24:30 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.41.237:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:24:30 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Liveness probe failed: Get "http://172.20.62.94:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:24:30 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Readiness probe failed: Get "http://172.20.62.94:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:25:00 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Readiness probe failed: Get "http://172.20.62.94:8083/healthz": dial tcp 172.20.62.94:8083: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:25:30 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:25:30 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Killing: Container webserver failed liveness probe, will be restarted
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:25:30 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:25:30 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Killing: Container webserver failed liveness probe, will be restarted
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:25:30 +0000 UTC - event for
netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Liveness probe failed: Get "http://172.20.41.237:8083/healthz": dial tcp 172.20.41.237:8083: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:25:30 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Killing: Container webserver failed liveness probe, will be restarted
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:25:30 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:29:10.052: INFO: At 2023-01-12 17:27:00 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.41.237:8083/healthz": dial tcp 172.20.41.237:8083: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 17:29:10.083: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 12 17:29:10.083: INFO: netserver-0 i-01daa1f0ea8dcef5d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:24:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:24:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC }]
Jan 12 17:29:10.083: INFO: netserver-1 i-03f9dde5751a3fd38 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC }]
Jan 12 17:29:10.083: INFO: netserver-2 i-06a506de3e6c2b98a Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC }
{Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC }]
Jan 12 17:29:10.083: INFO: netserver-3 i-06e12471aa18677f8 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:23:39 +0000 UTC }]
Jan 12 17:29:10.083: INFO:
... skipping 1767 lines ...
Only supported for providers [openstack] (not aws)
In [BeforeEach] at: test/e2e/storage/drivers/in_tree.go:973
------------------------------
SSSS
------------------------------
• [FAILED] [302.300 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
[It] should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
test/e2e/network/service.go:2043
Begin Captured GinkgoWriter Output >>
... skipping 314 lines ...
Jan 12 17:31:04.032: INFO: Pod "webserver-pod": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.064551319s
Jan 12 17:31:04.032: INFO: The phase of Pod webserver-pod is Running (Ready = false)
Jan 12 17:31:06.030: INFO: Pod "webserver-pod": Phase="Running", Reason="", readiness=false.
Elapsed: 5m0.062751926s
Jan 12 17:31:06.030: INFO: The phase of Pod webserver-pod is Running (Ready = false)
Jan 12 17:31:06.061: INFO: Pod "webserver-pod": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.093506583s
Jan 12 17:31:06.061: INFO: The phase of Pod webserver-pod is Running (Ready = false)
Jan 12 17:31:06.061: FAIL: error waiting for pod webserver-pod to be ready timed out while waiting for pod services-6670/webserver-pod to be running and ready
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func26.24()
test/e2e/network/service.go:2119 +0x912
[AfterEach] [sig-network] Services
test/e2e/framework/node/init/init.go:32
... skipping 6 lines ...
STEP: Collecting events from namespace "services-6670". 01/12/23 17:31:06.093
STEP: Found 9 events. 01/12/23 17:31:06.124
Jan 12 17:31:06.124: INFO: At 2023-01-12 17:26:05 +0000 UTC - event for webserver-pod: {default-scheduler } Scheduled: Successfully assigned services-6670/webserver-pod to i-03f9dde5751a3fd38
Jan 12 17:31:06.124: INFO: At 2023-01-12 17:26:07 +0000 UTC - event for webserver-pod: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:31:06.124: INFO: At 2023-01-12 17:26:07 +0000 UTC - event for webserver-pod: {kubelet i-03f9dde5751a3fd38} Created: Created container agnhost
Jan 12 17:31:06.124: INFO: At 2023-01-12 17:26:07 +0000 UTC - event for webserver-pod: {kubelet i-03f9dde5751a3fd38} Started: Started container agnhost
Jan 12 17:31:06.124: INFO: At 2023-01-12 17:26:09 +0000 UTC - event for webserver-pod: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.56.133:80/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:31:06.124: INFO: At 2023-01-12 17:26:10 +0000 UTC - event for webserver-pod: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get
"http://172.20.56.133:80/readyz": dial tcp 172.20.56.133:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 17:31:06.124: INFO: At 2023-01-12 17:26:17 +0000 UTC - event for webserver-pod: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.56.133:80/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 17:31:06.124: INFO: At 2023-01-12 17:26:27 +0000 UTC - event for webserver-pod: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.56.133:80/healthz": dial tcp 172.20.56.133:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 17:31:06.124: INFO: At 2023-01-12 17:26:37 +0000 UTC - event for webserver-pod: {kubelet i-03f9dde5751a3fd38} Killing: Container agnhost failed liveness probe, will be restarted Jan 12 17:31:06.155: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 17:31:06.155: INFO: webserver-pod i-03f9dde5751a3fd38 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:26:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:26:05 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:26:05 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:26:05 +0000 UTC }] Jan 12 17:31:06.155: INFO: Jan 12 17:31:06.220: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 17:31:06.251: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 17541 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux 
node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:30:46 
+0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-12 17:30:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 17:30:50 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 17:30:50 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 17:30:50 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 17:30:50 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-026209541c2648d9c kubernetes.io/csi/ebs.csi.aws.com^vol-08b59fe08a54daf3f kubernetes.io/csi/ebs.csi.aws.com^vol-0e4626cc3a1b74520],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-08b59fe08a54daf3f,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-026209541c2648d9c,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0e4626cc3a1b74520,DevicePath:,},},Config:nil,},} ... skipping 235 lines ... Latency metrics for node i-06e12471aa18677f8 [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 STEP: Destroying namespace "services-6670" for this suite.
01/12/23 17:31:07.893 << End Captured GinkgoWriter Output Jan 12 17:31:06.061: error waiting for pod webserver-pod to be ready timed out while waiting for pod services-6670/webserver-pod to be running and ready In [It] at: test/e2e/network/service.go:2119 ------------------------------ S [SKIPPED] [0.000 seconds] [sig-storage] In-tree Volumes test/e2e/storage/utils/framework.go:23 [Driver: azure-file] ... skipping 562 lines ... Driver emptydir doesn't support DynamicPV -- skipping In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116 ------------------------------ SSSSSSSS ------------------------------ • [FAILED] [304.751 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [BeforeEach] test/e2e/apimachinery/webhook.go:90 should mutate pod and apply defaults after mutation [Conformance] test/e2e/apimachinery/webhook.go:264 Begin Captured GinkgoWriter Output >> ... skipping 161 lines ...
Jan 12 17:31:53.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 17:31:55.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 17:31:57.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 17:31:59.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 17:32:01.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, 
time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 17:32:01.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 17:32:01.682: INFO: Unexpected error: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.43, string=sample-webhook-deployment, string=webhook-2128): <*errors.errorString | 0xc000c33780>: { s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-865554f4d9\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}", } Jan 12 17:32:01.682: FAIL: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.43, string=sample-webhook-deployment, string=webhook-2128): error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.deployWebhookAndService(0xc000f50c30, {0xc003433680, 0x2c}, 0xc004656aa0, 0x20fb, 0x20fc) test/e2e/apimachinery/webhook.go:826 +0xed2 k8s.io/kubernetes/test/e2e/apimachinery.glob..func28.1() test/e2e/apimachinery/webhook.go:102 +0x226 ... skipping 12 lines ... 
Jan 12 17:32:01.861: INFO: At 2023-01-12 17:27:01 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-865554f4d9 to 1 Jan 12 17:32:01.861: INFO: At 2023-01-12 17:27:01 +0000 UTC - event for sample-webhook-deployment-865554f4d9: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-865554f4d9-dmzwb Jan 12 17:32:01.861: INFO: At 2023-01-12 17:27:01 +0000 UTC - event for sample-webhook-deployment-865554f4d9-dmzwb: {default-scheduler } Scheduled: Successfully assigned webhook-2128/sample-webhook-deployment-865554f4d9-dmzwb to i-01daa1f0ea8dcef5d Jan 12 17:32:01.861: INFO: At 2023-01-12 17:27:02 +0000 UTC - event for sample-webhook-deployment-865554f4d9-dmzwb: {kubelet i-01daa1f0ea8dcef5d} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 17:32:01.861: INFO: At 2023-01-12 17:27:02 +0000 UTC - event for sample-webhook-deployment-865554f4d9-dmzwb: {kubelet i-01daa1f0ea8dcef5d} Created: Created container sample-webhook Jan 12 17:32:01.861: INFO: At 2023-01-12 17:27:02 +0000 UTC - event for sample-webhook-deployment-865554f4d9-dmzwb: {kubelet i-01daa1f0ea8dcef5d} Started: Started container sample-webhook Jan 12 17:32:01.861: INFO: At 2023-01-12 17:27:04 +0000 UTC - event for sample-webhook-deployment-865554f4d9-dmzwb: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "https://172.20.60.29:8444/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 12 17:32:01.861: INFO: At 2023-01-12 17:27:07 +0000 UTC - event for sample-webhook-deployment-865554f4d9-dmzwb: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "https://172.20.60.29:8444/readyz": dial tcp 172.20.60.29:8444: i/o timeout Jan 12 17:32:01.861: INFO: At 2023-01-12 17:27:13 +0000 UTC - event for sample-webhook-deployment-865554f4d9-dmzwb: {kubelet 
i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "https://172.20.60.29:8444/readyz": context deadline exceeded Jan 12 17:32:01.889: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 17:32:01.889: INFO: Jan 12 17:32:01.919: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 17:32:01.947: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 18143 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1131":"csi-mock-csi-mock-volumes-1131","ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 
{"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:31:31 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-12 17:31:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 17:31:31 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 17:31:31 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 17:31:31 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 17:31:31 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-026209541c2648d9c kubernetes.io/csi/ebs.csi.aws.com^vol-0e4626cc3a1b74520],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-026209541c2648d9c,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0e4626cc3a1b74520,DevicePath:,},},Config:nil,},} Jan 12 17:32:01.948: INFO: ... skipping 512 lines ... [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 STEP: Destroying namespace "webhook-2128" for this suite. 01/12/23 17:32:05.568 STEP: Destroying namespace "webhook-2128-markers" for this suite.
01/12/23 17:32:05.601 << End Captured GinkgoWriter Output Jan 12 17:32:01.682: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.43, string=sample-webhook-deployment, string=webhook-2128): error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 17, 27, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} In [BeforeEach] at: test/e2e/apimachinery/webhook.go:826 ------------------------------ SSSSSSSS ------------------------------ S [SKIPPED] [0.001 seconds] [sig-storage] In-tree Volumes ... skipping 525 lines ...
Driver local doesn't support DynamicPV -- skipping In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116 ------------------------------ •S•SS ------------------------------ • [FAILED] [304.010 seconds] [sig-cli] Kubectl client test/e2e/kubectl/framework.go:23 Simple pod [BeforeEach] test/e2e/kubectl/kubectl.go:411 should support exec test/e2e/kubectl/kubectl.go:421 ... skipping 16 lines ... Jan 12 17:27:48.603: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8598 create -f -' Jan 12 17:27:49.908: INFO: stderr: "" Jan 12 17:27:49.908: INFO: stdout: "pod/httpd created\n" Jan 12 17:27:49.908: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Jan 12 17:27:49.908: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-8598" to be "running and ready" Jan 12 17:27:49.936: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.537991ms Jan 12 17:27:49.936: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'i-06a506de3e6c2b98a' to be 'Running' but was 'Pending' Jan 12 17:27:51.967: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05903832s Jan 12 17:27:51.967: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'i-06a506de3e6c2b98a' to be 'Running' but was 'Pending' Jan 12 17:27:53.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false.
Elapsed: 4.057221539s
Jan 12 17:27:53.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC  }]
... skipping repeated poll output: the identical check recurs every ~2s from 17:27:55 through 17:29:35 (Elapsed 6s through 1m46s), each time with Phase="Running", readiness=false, and the same ContainersNotReady conditions for the [httpd] container ...
Jan 12 17:29:37.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false.
Elapsed: 1m48.057269574s Jan 12 17:29:37.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:39.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.057340572s Jan 12 17:29:39.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:41.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m52.058238283s Jan 12 17:29:41.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:43.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.057968694s Jan 12 17:29:43.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:45.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m56.058064989s Jan 12 17:29:45.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:47.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.058260218s Jan 12 17:29:47.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:49.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m0.057294455s Jan 12 17:29:49.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:51.967: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.058939113s Jan 12 17:29:51.967: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:53.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m4.057267411s Jan 12 17:29:53.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:55.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.058389853s Jan 12 17:29:55.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:57.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m8.058148256s Jan 12 17:29:57.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:29:59.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.078996148s Jan 12 17:29:59.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:01.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.05722146s Jan 12 17:30:01.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:03.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.057330634s Jan 12 17:30:03.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:05.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m16.05815431s Jan 12 17:30:05.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:07.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.056762671s Jan 12 17:30:07.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:09.967: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m20.058745281s Jan 12 17:30:09.967: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:11.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.057108638s Jan 12 17:30:11.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:13.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m24.057227704s Jan 12 17:30:13.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:15.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.057543666s Jan 12 17:30:15.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:17.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m28.057198253s Jan 12 17:30:17.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:19.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.058219366s Jan 12 17:30:19.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:21.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m32.058035821s Jan 12 17:30:21.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:23.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.058521231s Jan 12 17:30:23.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:25.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m36.057914854s Jan 12 17:30:25.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:27.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.057270743s Jan 12 17:30:27.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:29.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m40.057618735s Jan 12 17:30:29.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:31.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m42.057107618s Jan 12 17:30:31.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:33.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m44.057463144s Jan 12 17:30:33.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:35.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.057851546s Jan 12 17:30:35.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:37.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m48.05854732s Jan 12 17:30:37.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:39.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m50.057363088s Jan 12 17:30:39.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:41.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m52.05720867s Jan 12 17:30:41.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:43.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m54.058422087s Jan 12 17:30:43.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:45.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m56.058398084s Jan 12 17:30:45.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:47.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m58.057525862s Jan 12 17:30:47.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:49.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m0.057462176s Jan 12 17:30:49.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:51.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 3m2.057047011s Jan 12 17:30:51.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:53.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m4.05732765s Jan 12 17:30:53.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:55.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 3m6.057272887s Jan 12 17:30:55.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:30:57.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
... skipping 25 lines of repeated readiness polls (pod 'httpd' on 'i-06a506de3e6c2b98a' remained Phase="Running", readiness=false; conditions unchanged since 2023-01-12 17:27:49) ...
Elapsed: 4m48.057451954s Jan 12 17:32:37.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:32:39.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.05709873s Jan 12 17:32:39.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:32:41.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m52.05816901s Jan 12 17:32:41.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:32:43.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.058498381s Jan 12 17:32:43.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:32:45.965: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m56.057445281s Jan 12 17:32:45.965: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:32:47.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.058220963s Jan 12 17:32:47.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:32:49.966: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 5m0.058244019s Jan 12 17:32:49.966: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:32:49.995: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.086750257s Jan 12 17:32:49.995: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-06a506de3e6c2b98a' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:27:49 +0000 UTC }] Jan 12 17:32:49.995: INFO: Pod httpd failed to be running and ready. Jan 12 17:32:49.995: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [httpd] Jan 12 17:32:49.995: FAIL: Expected <bool>: false to equal <bool>: true Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() ... skipping 21 lines ... STEP: Collecting events from namespace "kubectl-8598". 01/12/23 17:32:50.769 STEP: Found 6 events.
[38;5;243m01/12/23 17:32:50.798[0m Jan 12 17:32:50.798: INFO: At 2023-01-12 17:27:49 +0000 UTC - event for httpd: {default-scheduler } Scheduled: Successfully assigned kubectl-8598/httpd to i-06a506de3e6c2b98a Jan 12 17:32:50.798: INFO: At 2023-01-12 17:27:50 +0000 UTC - event for httpd: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Jan 12 17:32:50.798: INFO: At 2023-01-12 17:27:50 +0000 UTC - event for httpd: {kubelet i-06a506de3e6c2b98a} Created: Created container httpd Jan 12 17:32:50.798: INFO: At 2023-01-12 17:27:51 +0000 UTC - event for httpd: {kubelet i-06a506de3e6c2b98a} Started: Started container httpd Jan 12 17:32:50.798: INFO: At 2023-01-12 17:28:05 +0000 UTC - event for httpd: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.41.126:80/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 17:32:50.798: INFO: At 2023-01-12 17:28:55 +0000 UTC - event for httpd: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.41.126:80/": dial tcp 172.20.41.126:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 17:32:50.826: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 17:32:50.826: INFO: Jan 12 17:32:50.856: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 17:32:50.886: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 19401 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a 
topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-3829":"i-01daa1f0ea8dcef5d","ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:32:12 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} 
status} {kubelet Update v1 2023-01-12 17:32:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 17:32:12 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 17:32:12 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 17:32:12 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 17:32:12 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0e4626cc3a1b74520],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0e4626cc3a1b74520,DevicePath:,},},Config:nil,},} Jan 12 17:32:50.886: INFO: ... skipping 1095 lines ... 
test/e2e/framework/node/init/init.go:32 << End Captured GinkgoWriter Output Driver hostPath doesn't support GenericEphemeralVolume -- skipping In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116 ------------------------------ • [FAILED] [334.770 seconds] [sig-network] Networking test/e2e/network/common/framework.go:23 [It] should check kube-proxy urls test/e2e/network/networking.go:132 Begin Captured GinkgoWriter Output >> ... skipping 354 lines ... Jan 12 17:33:35.003: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.061742417s Jan 12 17:33:35.003: INFO: The phase of Pod netserver-2 is Running (Ready = false) Jan 12 17:33:37.003: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.061280154s Jan 12 17:33:37.003: INFO: The phase of Pod netserver-2 is Running (Ready = false) Jan 12 17:33:37.034: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.091802496s Jan 12 17:33:37.034: INFO: The phase of Pod netserver-2 is Running (Ready = false) Jan 12 17:33:37.035: INFO: Unexpected error: <*pod.timeoutError | 0xc003fffd10>: { msg: "timed out while waiting for pod nettest-1433/netserver-2 to be running and ready", observedObjects: [ <*v1.Pod | 0xc001a82d80>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { ... skipping 128 lines ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output Jan 12 17:33:37.035: FAIL: timed out while waiting for pod nettest-1433/netserver-2 to be running and ready Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0016a0000, {0x75c6f5c, 0x9}, 0xc003cdb770) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0016a0000, 0x7fa06c10d648?) test/e2e/framework/network/utils.go:763 +0x55 ... skipping 14 lines ... STEP: Collecting events from namespace "nettest-1433". 01/12/23 17:33:37.067 STEP: Found 21 events. 01/12/23 17:33:37.099 Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:04 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-1433/netserver-0 to i-01daa1f0ea8dcef5d Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:04 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-1433/netserver-1 to i-03f9dde5751a3fd38 Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:04 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned nettest-1433/netserver-2 to i-06a506de3e6c2b98a Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:04 +0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned nettest-1433/netserver-3 to i-06e12471aa18677f8 Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:05 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "58446643f0d1d1cb9bfc4bfa8aa4e0276032f45fa29ea0f00615317759c3c041": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:05 +0000 UTC - event for
netserver-1: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:05 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Created: Created container webserver Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:05 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Started: Started container webserver Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:05 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Started: Started container webserver Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:05 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Created: Created container webserver Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:05 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:06 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:06 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:06 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:18 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Started: Started container webserver Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:18 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Created: Created container webserver Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:18 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 
12 17:33:37.099: INFO: At 2023-01-12 17:28:55 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.32.17:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 17:33:37.099: INFO: At 2023-01-12 17:28:55 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Liveness probe failed: Get "http://172.20.32.17:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 17:33:37.099: INFO: At 2023-01-12 17:29:25 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Liveness probe failed: Get "http://172.20.32.17:8083/healthz": dial tcp 172.20.32.17:8083: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 17:33:37.099: INFO: At 2023-01-12 17:29:55 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Killing: Container webserver failed liveness probe, will be restarted Jan 12 17:33:37.131: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 17:33:37.131: INFO: netserver-0 i-01daa1f0ea8dcef5d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC }] Jan 12 17:33:37.131: INFO: netserver-1 i-03f9dde5751a3fd38 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC }] Jan 12 17:33:37.131: INFO: netserver-2 i-06a506de3e6c2b98a Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC }] Jan 12 17:33:37.131: INFO: netserver-3 i-06e12471aa18677f8 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:04 +0000 UTC }] Jan 12 17:33:37.131: INFO: ... skipping 428 lines ... test/e2e/framework/node/init/init.go:32 << End Captured GinkgoWriter Output Driver hostPath doesn't support PreprovisionedPV -- skipping In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116 ------------------------------ • [FAILED] [314.594 seconds] [sig-network] Networking test/e2e/network/common/framework.go:23 Granular Checks: Services test/e2e/network/networking.go:145 [It] should function for node-Service: udp test/e2e/network/networking.go:206 ... skipping 332 lines ... Jan 12 17:33:45.129: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.05844545s Jan 12 17:33:45.129: INFO: The phase of Pod netserver-1 is Running (Ready = false) Jan 12 17:33:47.130: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.059053729s Jan 12 17:33:47.130: INFO: The phase of Pod netserver-1 is Running (Ready = false) Jan 12 17:33:47.159: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 5m0.087789691s Jan 12 17:33:47.159: INFO: The phase of Pod netserver-1 is Running (Ready = false) Jan 12 17:33:47.160: INFO: Unexpected error: <*pod.timeoutError | 0xc00167f200>: { msg: "timed out while waiting for pod nettest-1768/netserver-1 to be running and ready", observedObjects: [ <*v1.Pod | 0xc004a63200>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { ... skipping 128 lines ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output Jan 12 17:33:47.161: FAIL: timed out while waiting for pod nettest-1768/netserver-1 to be running and ready Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0054928c0, {0x75c6f5c, 0x9}, 0xc0053b3b90) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0054928c0, 0x7f74ac90ed68?) test/e2e/framework/network/utils.go:763 +0x55 ... skipping 26 lines ... 
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:28:35 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:28:35 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:28:35 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:28:35 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Started: Started container webserver
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:28:35 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:28:35 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Created: Created container webserver
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:29:15 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.52.155:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:29:15 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.52.155:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:29:15 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Liveness probe failed: Get "http://172.20.51.243:8083/healthz": dial tcp 172.20.51.243:8083: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:29:15 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Readiness probe failed: Get "http://172.20.51.243:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:29:45 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Liveness probe failed: Get "http://172.20.51.243:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:30:15 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Killing: Container webserver failed liveness probe, will be restarted
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:30:15 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Killing: Container webserver failed liveness probe, will be restarted
Jan 12 17:33:47.220: INFO: At 2023-01-12 17:32:35 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.52.155:8083/healthz": dial tcp 172.20.52.155:8083: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 17:33:47.248: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 12 17:33:47.249: INFO: netserver-0  i-01daa1f0ea8dcef5d  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC }]
Jan 12 17:33:47.249: INFO: netserver-1  i-03f9dde5751a3fd38  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC }]
Jan 12 17:33:47.249: INFO: netserver-2  i-06a506de3e6c2b98a  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC }]
Jan 12 17:33:47.249: INFO: netserver-3  i-06e12471aa18677f8  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:28:34 +0000 UTC }]
Jan 12 17:33:47.249: INFO:
... skipping 376 lines ...
------------------------------
S [SKIPPED] [0.324 seconds]
External Storage [Driver: ebs.csi.aws.com]
test/e2e/storage/external/external.go:173
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191

Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 2 lines ...
Jan 12 17:33:49.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology 01/12/23 17:33:49.713
STEP: Waiting for a default service account to be provisioned in namespace 01/12/23 17:33:49.802
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/12/23 17:33:49.857
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/framework/metrics/init/init.go:31
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191
Jan 12 17:33:49.942: INFO: found topology map[topology.ebs.csi.aws.com/zone:us-east-1a]
Jan 12 17:33:49.943: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc 01/12/23 17:33:49.943
STEP: Deleting sc 01/12/23 17:33:49.943
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 294 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
test/e2e/storage/in_tree_volumes.go:85
[Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
test/e2e/storage/framework/testsuite.go:51
should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:191

Begin Captured GinkgoWriter Output >>
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jan 12 17:34:17.989: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 533 lines ...
• [SLOW TEST] [62.446 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted]
Two pods mounting a local volume one after the other
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:258
------------------------------
S
------------------------------
• [FAILED] [324.819 seconds]
[sig-network] Networking
test/e2e/common/network/framework.go:23
Granular Checks: Pods
test/e2e/common/network/networking.go:32
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/network/networking.go:105
... skipping 342 lines ...
Jan 12 17:34:33.621: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.064099217s
Jan 12 17:34:33.621: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Jan 12 17:34:35.618: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.060986374s
Jan 12 17:34:35.618: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Jan 12 17:34:35.647: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.090438124s
Jan 12 17:34:35.647: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Jan 12 17:34:35.648: INFO: Unexpected error:
    <*pod.timeoutError | 0xc005788d20>: {
        msg: "timed out while waiting for pod pod-network-test-6929/netserver-1 to be running and ready",
        observedObjects: [
            <*v1.Pod | 0xc000cf6900>{
                TypeMeta: {Kind: "", APIVersion: ""},
                ObjectMeta: {
... skipping 127 lines ...
Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
Jan 12 17:34:35.648: FAIL: timed out while waiting for pod pod-network-test-6929/netserver-1 to be running and ready

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0020048c0, {0x75c6f5c, 0x9}, 0xc00342eb40)
	test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0020048c0, 0x47?)
	test/e2e/framework/network/utils.go:763 +0x55
... skipping 12 lines ...
STEP: Collecting events from namespace "pod-network-test-6929". 01/12/23 17:34:35.681
STEP: Found 27 events. 01/12/23 17:34:35.711
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:13 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned pod-network-test-6929/netserver-0 to i-01daa1f0ea8dcef5d
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:13 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned pod-network-test-6929/netserver-1 to i-03f9dde5751a3fd38
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:13 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned pod-network-test-6929/netserver-2 to i-06a506de3e6c2b98a
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:13 +0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned pod-network-test-6929/netserver-3 to i-06e12471aa18677f8
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:13 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7c6f2338f8744f5796d881f422c4e96cf71f9c26963af057848fbd33ac3b9829": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:14 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:14 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Created: Created container webserver
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:14 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Started: Started container webserver
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:14 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:14 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Created: Created container webserver
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:14 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Started: Started container webserver
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:14 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:14 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:14 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:29 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "042318a3fc95629e28c9b475c661241076da2b08e69605d80d7807a0c4792d8c": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:40 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e81b7dd0fc0df19a823cf00dfed9e1d6af4045619caf77c35915690266f7c247": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:53 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:53 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Created: Created container webserver
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:29:53 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Started: Started container webserver
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:30:03 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.53.192:8083/healthz": dial tcp 172.20.53.192:8083: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:30:03 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.53.192:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:30:03 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Liveness probe failed: Get "http://172.20.56.65:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:30:03 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.56.65:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:30:33 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.53.192:8083/healthz": dial tcp 172.20.53.192:8083: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:30:33 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.53.192:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:31:03 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Killing: Container webserver failed liveness probe, will be restarted
Jan 12 17:34:35.711: INFO: At 2023-01-12 17:31:03 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Killing: Container webserver failed liveness probe, will be restarted
Jan 12 17:34:35.741: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 12 17:34:35.741: INFO: netserver-0  i-01daa1f0ea8dcef5d  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC }]
Jan 12 17:34:35.741: INFO: netserver-1  i-03f9dde5751a3fd38  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC }]
Jan 12 17:34:35.741: INFO: netserver-2  i-06a506de3e6c2b98a  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC }]
Jan 12 17:34:35.741: INFO: netserver-3  i-06e12471aa18677f8  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:29:13 +0000 UTC }]
Jan 12 17:34:35.741: INFO:
... skipping 756 lines ...
Only supported for providers [openstack] (not aws)
In [BeforeEach] at: test/e2e/storage/drivers/in_tree.go:973
------------------------------
S
------------------------------
• [FAILED] [314.633 seconds]
[sig-network] Networking
test/e2e/network/common/framework.go:23
Granular Checks: Services
test/e2e/network/networking.go:145
[It] should function for pod-Service: udp
test/e2e/network/networking.go:162
... skipping 340 lines ...
Jan 12 17:35:18.028: INFO: Pod "netserver-3": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.057860652s
Jan 12 17:35:18.028: INFO: The phase of Pod netserver-3 is Running (Ready = false)
Jan 12 17:35:20.027: INFO: Pod "netserver-3": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.05710381s
Jan 12 17:35:20.027: INFO: The phase of Pod netserver-3 is Running (Ready = false)
Jan 12 17:35:20.056: INFO: Pod "netserver-3": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.085413799s
Jan 12 17:35:20.056: INFO: The phase of Pod netserver-3 is Running (Ready = false)
Jan 12 17:35:20.057: INFO: Unexpected error:
    <*pod.timeoutError | 0xc002408660>: {
        msg: "timed out while waiting for pod nettest-818/netserver-3 to be running and ready",
        observedObjects: [
            <*v1.Pod | 0xc000aa2480>{
                TypeMeta: {Kind: "", APIVersion: ""},
                ObjectMeta: {
... skipping 128 lines ...
Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
Jan 12 17:35:20.057: FAIL: timed out while waiting for pod nettest-818/netserver-3 to be running and ready

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0001efea0, {0x75c6f5c, 0x9}, 0xc002997410)
	test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0001efea0, 0x7fc64a3c36c0?)
	test/e2e/framework/network/utils.go:763 +0x55
... skipping 23 lines ...
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:08 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Created: Created container webserver
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:08 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Started: Started container webserver
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:08 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:08 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:08 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:08 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:08 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e0f75c12f22937dfd9fe2e5d3bdad6ee731c241061bf09d511d89778967de859": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:19 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4f0de21b31ebf3836137728511cb0767a698652c05c29a65a21aeb1a50ce28b8": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:31 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "43fc51d271f7da8df640576abbf0fece9b7197d87f069b2f3008e95bacdbef8a": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:43 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "dbda9734038352f6434b30bcea1584b8f1ab83507dafd3e4b76f3f909611379a": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:30:56 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3b07c73d1e4cb2371a40a6c339efbcac3778186dab3d262f0589bae87eb5722f": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:31:10 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "211cf7cf4408aa70ae295d1fb93569826232a2a61ddd6e9a3931149307cdbd99": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:31:24 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "dadc19881fb6cc6e3892df2875c5917af1ca255801e28236c84f336b9ffbb39b": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:31:35 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7d822b7ddb2a1817afc71b692c148e189f1067b9b62b42d6048064714e832222": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:31:48 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b6991e3732174a2fda094115263e78e7df4421beea0a6a14f1e9485bb4c979f6": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:32:00 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f320aa47998e35f411d7486a1f8a929d7eaef0773c7b765fd1d997d11dae9abe": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:32:14 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:32:14 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Created: Created container webserver
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:32:14 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Started: Started container webserver
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:32:58 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Liveness probe failed: Get "http://172.20.43.227:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:32:58 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Readiness probe failed: Get "http://172.20.43.227:8083/healthz": dial tcp 172.20.43.227:8083: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:33:28 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Readiness probe failed: Get "http://172.20.43.227:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:35:20.116: INFO: At 2023-01-12 17:33:58 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Killing: Container webserver failed liveness probe, will be restarted
Jan 12 17:35:20.144: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 12 17:35:20.144: INFO: netserver-0  i-01daa1f0ea8dcef5d  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC }]
Jan 12 17:35:20.144: INFO: netserver-1  i-03f9dde5751a3fd38  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC }]
Jan 12 17:35:20.144: INFO: netserver-2  i-06a506de3e6c2b98a  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC }]
Jan 12 17:35:20.144: INFO: netserver-3  i-06e12471aa18677f8  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:30:07 +0000 UTC }]
Jan 12 17:35:20.144: INFO:
... skipping 580 lines ...
------------------------------
• [SLOW TEST] [44.527 seconds]
[sig-api-machinery] Garbage collector
should orphan pods created by rc if delete options say so [Conformance]
test/e2e/apimachinery/garbage_collector.go:370
------------------------------
• [SLOW TEST] [49.104 seconds]
[sig-node] InitContainer [NodeConformance]
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
test/e2e/common/node/init_container.go:334
------------------------------
• [SLOW TEST] [67.469 seconds]
[sig-node] Probing container
should be restarted by liveness probe after startup probe enables it
test/e2e/common/node/container_probe.go:379
------------------------------
... skipping 788 lines ...
Driver hostPath doesn't support DynamicPV -- skipping
In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
• [FAILED] [302.343 seconds]
[sig-network] Networking
test/e2e/network/common/framework.go:23
Granular Checks: Services
test/e2e/network/networking.go:145
[It] should function for endpoint-Service: http
test/e2e/network/networking.go:236
... skipping 316 lines ...
Jan 12 17:37:13.591: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.062763225s
Jan 12 17:37:13.591: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 12 17:37:15.591: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.062240064s
Jan 12 17:37:15.591: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 12 17:37:15.621: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.092446836s
Jan 12 17:37:15.621: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 12 17:37:15.622: INFO: Unexpected error:
    <*pod.timeoutError | 0xc0027612c0>: {
        msg: "timed out while waiting for pod nettest-5073/netserver-0 to be running and ready",
        observedObjects: [
            <*v1.Pod | 0xc00477ed80>{
                TypeMeta: {Kind: "", APIVersion: ""},
                ObjectMeta: {
... skipping 128 lines ...
Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
Jan 12 17:37:15.622: FAIL: timed out while waiting for pod nettest-5073/netserver-0 to be running and ready

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002eeee0, {0x75c6f5c, 0x9}, 0xc003615da0)
	test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002eeee0, 0x7fc66848f1a8?)
	test/e2e/framework/network/utils.go:763 +0x55
... skipping 12 lines ...
dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/12/23 17:37:15.655
STEP: Collecting events from namespace "nettest-5073". 01/12/23 17:37:15.655
STEP: Found 27 events. 01/12/23 17:37:15.685
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:15 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-5073/netserver-0 to i-01daa1f0ea8dcef5d
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:15 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-5073/netserver-1 to i-03f9dde5751a3fd38
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:15 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "70ca067c558207fce2ad0c083b48945a68c9925f981530429cee7d54e973f577": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:15 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned nettest-5073/netserver-2 to i-06a506de3e6c2b98a
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:15 +0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned nettest-5073/netserver-3 to i-06e12471aa18677f8
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:16 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:16 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Created: Created container webserver
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:16 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Started: Started container webserver
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:16 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:16 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:16 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:16 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Started: Started container webserver
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:16 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:16 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Created: Created container webserver
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:27 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ec3965c444ca0a62398c29d11b6a3b5ad4343c4ac6241130b7aa2c80309e95b2": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:42 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "263704e027f572d4263fcb19a8d92348078b8871f1a64960d83cb1d1ffc35fb6": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:32:56 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "53ff97131c8dc6aad9250c812ba4ed18d44206f00882e6257a4593ecb7b42ee5": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:33:05 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Liveness probe failed: Get "http://172.20.60.29:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:33:05 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "http://172.20.60.29:8083/healthz": dial tcp 172.20.60.29:8083: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:33:11 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "97634e82fedf7d4d3e0a42e031aa954acd239ebc3e521e84ff1b8b0c7fd0b36f": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:33:23 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d380a03050820c91e963eca0bfa0c83759660c4c28d582c4afdc4d90c8a320f8": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:33:34 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f2b74e1eeb53c14a9a8b8aa8600b53d69a29ede0616df005c3088758415753c0": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:33:35 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "http://172.20.60.29:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:33:48 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Started: Started container webserver
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:33:48 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Created: Created container webserver
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:33:48 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 12 17:37:15.686: INFO: At 2023-01-12 17:34:05 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Killing: Container webserver failed liveness probe, will be restarted
Jan 12 17:37:15.717: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 12 17:37:15.717: INFO: netserver-0 i-01daa1f0ea8dcef5d
Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC }] Jan 12 17:37:15.717: INFO: netserver-1 i-03f9dde5751a3fd38 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:34:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:34:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC }] Jan 12 17:37:15.717: INFO: netserver-2 i-06a506de3e6c2b98a Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC }] Jan 12 17:37:15.717: INFO: netserver-3 i-06e12471aa18677f8 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:32:15 +0000 UTC }] Jan 12 17:37:15.717: INFO: ... skipping 754 lines ... 
------------------------------
• [SLOW TEST] [23.921 seconds]
[sig-node] SSH should SSH to all nodes and run commands
test/e2e/node/ssh.go:47
Begin Captured StdOut/StdErr Output >>
error dialing ec2-user@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
error dialing ec2-user@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
error dialing ec2-user@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
error dialing ec2-user@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
error dialing ec2-user@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
<< End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [126.399 seconds]
[sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
... skipping 147 lines ...
test/e2e/framework/node/init/init.go:32
<< End Captured GinkgoWriter Output
Only supported for providers [openstack] (not aws)
In [BeforeEach] at: test/e2e/storage/drivers/in_tree.go:973
------------------------------
• [FAILED] [874.200 seconds]
[sig-apps] StatefulSet
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:103
[It] should perform rolling updates and roll backs of template modifications with PVCs
test/e2e/apps/statefulset.go:294
... skipping 188 lines ...
Jan 12 17:33:39.357: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 12 17:33:39.357: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 12 17:33:39.357: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Jan 12 17:33:39.386: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 12 17:33:39.386: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 12 17:33:39.386: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Jan 12 17:33:39.386: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801e128?, 0xc002eda1a0}, 0x3, 0x3, 0xc0004daf00) test/e2e/framework/statefulset/wait.go:58 +0xf9 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 ... skipping 358 lines ... 
172.20.40.141 - - [12/Jan/2023:17:33:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.40.141 - - [12/Jan/2023:17:33:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.40.141 - - [12/Jan/2023:17:33:40 +0000] "GET /index.html HTTP/1.1" 200 45 Jan 12 17:33:40.491: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-8495 describe po ss-2' Jan 12 17:33:40.766: INFO: stderr: "" Jan 12 17:33:40.766: INFO: stdout: "Name: ss-2\nNamespace: statefulset-8495\nPriority: 0\nService Account: default\nNode: i-06e12471aa18677f8/172.20.54.239\nStart Time: Thu, 12 Jan 2023 17:24:38 +0000\nLabels: baz=blah\n controller-revision-hash=ss-6bd77cc946\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-2\nAnnotations: <none>\nStatus: Running\nIP: 172.20.34.228\nIPs:\n IP: 172.20.34.228\nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Container ID: containerd://e0521f37c62b2c45bbaecbd8049423f68fe26f88be4d7c67315c6ed8242940cc\n Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-4\n Image ID: registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Thu, 12 Jan 2023 17:24:44 +0000\n Ready: False\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /data/ from datadir (rw)\n /home from home (rw)\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-97r5p (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n datadir:\n Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n ClaimName: datadir-ss-2\n ReadOnly: false\n home:\n Type: HostPath (bare host directory volume)\n Path: 
/tmp/home\n HostPathType: \n kube-api-access-97r5p:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9m2s default-scheduler Successfully assigned statefulset-8495/ss-2 to i-06e12471aa18677f8\n Normal SuccessfulAttachVolume 8m59s attachdetach-controller AttachVolume.Attach succeeded for volume \"pvc-6dc65a35-8dd1-43d2-b0e6-6992935b24f3\"\n Normal Pulled 8m56s kubelet Container image \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\" already present on machine\n Normal Created 8m56s kubelet Created container webserver\n Normal Started 8m56s kubelet Started container webserver\n Warning Unhealthy 8m33s (x4 over 8m54s) kubelet Readiness probe failed: Get \"http://172.20.34.228:80/index.html\": dial tcp 172.20.34.228:80: i/o timeout (Client.Timeout exceeded while awaiting headers)\n Warning Unhealthy 3m55s (x273 over 8m53s) kubelet Readiness probe failed: Get \"http://172.20.34.228:80/index.html\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n" Jan 12 17:33:40.766: INFO: Output of kubectl describe ss-2: Name: ss-2 Namespace: statefulset-8495 Priority: 0 Service Account: default ... skipping 56 lines ... 
---- ------ ---- ---- ------- Normal Scheduled 9m2s default-scheduler Successfully assigned statefulset-8495/ss-2 to i-06e12471aa18677f8 Normal SuccessfulAttachVolume 8m59s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-6dc65a35-8dd1-43d2-b0e6-6992935b24f3" Normal Pulled 8m56s kubelet Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Normal Created 8m56s kubelet Created container webserver Normal Started 8m56s kubelet Started container webserver Warning Unhealthy 8m33s (x4 over 8m54s) kubelet Readiness probe failed: Get "http://172.20.34.228:80/index.html": dial tcp 172.20.34.228:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Warning Unhealthy 3m55s (x273 over 8m53s) kubelet Readiness probe failed: Get "http://172.20.34.228:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 17:33:40.766: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-8495 logs ss-2 --tail=100' Jan 12 17:33:41.025: INFO: stderr: "" Jan 12 17:33:41.025: INFO: stdout: "[Thu Jan 12 17:24:44.743028 2023] [mpm_event:notice] [pid 1:tid 140403580066664] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Jan 12 17:24:44.743080 2023] [core:notice] [pid 1:tid 140403580066664] AH00094: Command line: 'httpd -D FOREGROUND'\n" Jan 12 17:33:41.025: INFO: Last 100 log lines of ss-2: ... skipping 23 lines ... STEP: Found 41 events.
01/12/23 17:38:11.468 Jan 12 17:38:11.468: INFO: At 2023-01-12 17:23:39 +0000 UTC - event for datadir-ss-0: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Jan 12 17:38:11.468: INFO: At 2023-01-12 17:23:39 +0000 UTC - event for datadir-ss-0: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator Jan 12 17:38:11.468: INFO: At 2023-01-12 17:23:39 +0000 UTC - event for datadir-ss-0: {ebs.csi.aws.com_i-064d67fb1979934c5.ec2.internal_08c3cee3-26fd-4e93-9225-337adb67e72e } Provisioning: External provisioner is provisioning volume for claim "statefulset-8495/datadir-ss-0" Jan 12 17:38:11.468: INFO: At 2023-01-12 17:23:39 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success Jan 12 17:38:11.468: INFO: At 2023-01-12 17:23:39 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Jan 12 17:38:11.468: INFO: At 2023-01-12 17:23:49 +0000 UTC - event for datadir-ss-0: {ebs.csi.aws.com_i-064d67fb1979934c5.ec2.internal_08c3cee3-26fd-4e93-9225-337adb67e72e } ProvisioningFailed: failed to provision volume with StorageClass "kops-csi-1-21": rpc error: code = DeadlineExceeded desc = context deadline exceeded Jan 12 17:38:11.468: INFO: At 2023-01-12 17:23:53 +0000 UTC - event for datadir-ss-0: {ebs.csi.aws.com_i-064d67fb1979934c5.ec2.internal_08c3cee3-26fd-4e93-9225-337adb67e72e } ProvisioningSucceeded: Successfully provisioned volume pvc-199e38a3-a0c3-4e88-a67f-f2ffd88b3e3b Jan 12 17:38:11.468: INFO: At 2023-01-12 17:23:54 +0000 UTC - event for ss-0: {default-scheduler } Scheduled: Successfully assigned statefulset-8495/ss-0 to i-06a506de3e6c2b98a Jan 12 17:38:11.468: INFO: At 2023-01-12 17:23:58 +0000 UTC - event for ss-0: {attachdetach-controller }
SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-199e38a3-a0c3-4e88-a67f-f2ffd88b3e3b" Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:04 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:04 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:04 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver ... skipping 16 lines ... Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:38 +0000 UTC - event for datadir-ss-2: {ebs.csi.aws.com_i-064d67fb1979934c5.ec2.internal_08c3cee3-26fd-4e93-9225-337adb67e72e } ProvisioningSucceeded: Successfully provisioned volume pvc-6dc65a35-8dd1-43d2-b0e6-6992935b24f3 Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:38 +0000 UTC - event for ss-2: {default-scheduler } Scheduled: Successfully assigned statefulset-8495/ss-2 to i-06e12471aa18677f8 Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:41 +0000 UTC - event for ss-2: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-6dc65a35-8dd1-43d2-b0e6-6992935b24f3" Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:44 +0000 UTC - event for ss-2: {kubelet i-06e12471aa18677f8} Created: Created container webserver Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:44 +0000 UTC - event for ss-2: {kubelet i-06e12471aa18677f8} Started: Started container webserver Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:44 +0000 UTC - event for ss-2: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:46 +0000 UTC - event for ss-2: {kubelet i-06e12471aa18677f8} Unhealthy: Readiness probe failed: Get "http://172.20.34.228:80/index.html": dial tcp 
172.20.34.228:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 17:38:11.468: INFO: At 2023-01-12 17:24:47 +0000 UTC - event for ss-2: {kubelet i-06e12471aa18677f8} Unhealthy: Readiness probe failed: Get "http://172.20.34.228:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 17:38:11.468: INFO: At 2023-01-12 17:33:41 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-2 in StatefulSet ss successful Jan 12 17:38:11.468: INFO: At 2023-01-12 17:33:41 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-1 in StatefulSet ss successful Jan 12 17:38:11.468: INFO: At 2023-01-12 17:33:41 +0000 UTC - event for ss-1: {kubelet i-01daa1f0ea8dcef5d} Killing: Stopping container webserver Jan 12 17:38:11.468: INFO: At 2023-01-12 17:33:42 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Jan 12 17:38:11.468: INFO: At 2023-01-12 17:33:42 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} Killing: Stopping container webserver Jan 12 17:38:11.497: INFO: POD NODE PHASE GRACE CONDITIONS ... skipping 61297 lines ... 
Jan 12 18:07:50.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:07:52.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:07:54.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:07:56.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:07:58.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, 
time.January, 12, 18, 2, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:07:58.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:07:58.493: INFO: Unexpected error: waiting for sample-crd-conversion-webhook-deployment deployment status valid: <*errors.errorString | 0xc0013d6750>: { s: "error waiting for deployment \"sample-crd-conversion-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-crd-conversion-webhook-deployment-74ff66dd47\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}", } Jan 12 18:07:58.494: FAIL: waiting for sample-crd-conversion-webhook-deployment deployment status valid: error waiting for deployment "sample-crd-conversion-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.deployCustomResourceWebhookAndService(0xc000db9680, {0xc003fe1320, 0x2c}, 0xc0045f8e10, 0x24e3, 0x24e4) test/e2e/apimachinery/crd_conversion_webhook.go:327 +0xe19 k8s.io/kubernetes/test/e2e/apimachinery.glob..func5.1() test/e2e/apimachinery/crd_conversion_webhook.go:136 +0x206 ... skipping 12 lines ... 
Jan 12 18:07:58.687: INFO: At 2023-01-12 18:02:58 +0000 UTC - event for sample-crd-conversion-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-crd-conversion-webhook-deployment-74ff66dd47 to 1 Jan 12 18:07:58.687: INFO: At 2023-01-12 18:02:58 +0000 UTC - event for sample-crd-conversion-webhook-deployment-74ff66dd47: {replicaset-controller } SuccessfulCreate: Created pod: sample-crd-conversion-webhook-deployment-74ff66dd47-grxf9 Jan 12 18:07:58.687: INFO: At 2023-01-12 18:02:58 +0000 UTC - event for sample-crd-conversion-webhook-deployment-74ff66dd47-grxf9: {default-scheduler } Scheduled: Successfully assigned crd-webhook-9209/sample-crd-conversion-webhook-deployment-74ff66dd47-grxf9 to i-01daa1f0ea8dcef5d Jan 12 18:07:58.687: INFO: At 2023-01-12 18:02:59 +0000 UTC - event for sample-crd-conversion-webhook-deployment-74ff66dd47-grxf9: {kubelet i-01daa1f0ea8dcef5d} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 18:07:58.687: INFO: At 2023-01-12 18:02:59 +0000 UTC - event for sample-crd-conversion-webhook-deployment-74ff66dd47-grxf9: {kubelet i-01daa1f0ea8dcef5d} Created: Created container sample-crd-conversion-webhook Jan 12 18:07:58.687: INFO: At 2023-01-12 18:02:59 +0000 UTC - event for sample-crd-conversion-webhook-deployment-74ff66dd47-grxf9: {kubelet i-01daa1f0ea8dcef5d} Started: Started container sample-crd-conversion-webhook Jan 12 18:07:58.687: INFO: At 2023-01-12 18:03:00 +0000 UTC - event for sample-crd-conversion-webhook-deployment-74ff66dd47-grxf9: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "https://172.20.41.43:9444/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 12 18:07:58.687: INFO: At 2023-01-12 18:03:04 +0000 UTC - event for sample-crd-conversion-webhook-deployment-74ff66dd47-grxf9: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get 
"https://172.20.41.43:9444/readyz": dial tcp 172.20.41.43:9444: i/o timeout Jan 12 18:07:58.687: INFO: At 2023-01-12 18:03:17 +0000 UTC - event for sample-crd-conversion-webhook-deployment-74ff66dd47-grxf9: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "https://172.20.41.43:9444/readyz": context deadline exceeded Jan 12 18:07:58.716: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:07:58.716: INFO: Jan 12 18:07:58.746: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:07:58.778: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 53515 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-8462":"i-01daa1f0ea8dcef5d","csi-mock-csi-mock-volumes-9219":"i-01daa1f0ea8dcef5d","ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager 
Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:07:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2023-01-12 18:07:58 +0000 UTC FieldsV1 
{"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:07:43 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:07:43 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:07:43 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:07:43 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 
registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0ad0aa3a9a8de47ba,DevicePath:,},},Config:nil,},} Jan 12 18:07:58.778: INFO: ... skipping 251 lines ... Latency metrics for node i-06e12471aa18677f8 [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 STEP: Destroying namespace "crd-webhook-9209" for this suite. 01/12/23 18:08:00.451 << End Captured GinkgoWriter Output Jan 12 18:07:58.494: waiting for sample-crd-conversion-webhook-deployment deployment status valid: error waiting for deployment "sample-crd-conversion-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 2, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} In [BeforeEach] at: test/e2e/apimachinery/crd_conversion_webhook.go:327 ------------------------------ SSSSSSSSSS ------------------------------
S [SKIPPED] [0.000 seconds] [sig-storage] In-tree Volumes test/e2e/storage/utils/framework.go:23 [Driver: azure-disk] test/e2e/storage/in_tree_volumes.go:85 [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach] test/e2e/storage/framework/testsuite.go:51 should fail to schedule a pod which has topologies that conflict with AllowedTopologies test/e2e/storage/testsuites/topology.go:191 Begin Captured GinkgoWriter Output >> [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology test/e2e/storage/framework/testsuite.go:51 Jan 12 18:08:00.496: INFO: Only supported for providers [azure] (not aws) ... skipping 140 lines ... test/e2e/storage/volumes.go:50 ------------------------------ • [SLOW TEST] [24.394 seconds] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1 test/e2e/storage/persistent_volumes-local.go:241 ------------------------------ • [FAILED] [303.644 seconds] [sig-cli] Kubectl client test/e2e/kubectl/framework.go:23 Simple pod [BeforeEach] test/e2e/kubectl/kubectl.go:411 should support exec using resource/name test/e2e/kubectl/kubectl.go:461 ... skipping 34 lines ...
Jan 12 18:03:23.051: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5735 create -f -' Jan 12 18:03:23.745: INFO: stderr: "" Jan 12 18:03:23.745: INFO: stdout: "pod/httpd created\n" Jan 12 18:03:23.745: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Jan 12 18:03:23.745: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-5735" to be "running and ready" Jan 12 18:03:23.776: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.206846ms Jan 12 18:03:23.776: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'i-03f9dde5751a3fd38' to be 'Running' but was 'Pending' Jan 12 18:03:25.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.060934159s Jan 12 18:03:25.806: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:03:27.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.061735556s Jan 12 18:03:27.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:03:29.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.061746883s Jan 12 18:03:29.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:03:31.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
... skipping 15 lines ...
Elapsed: 1m8.06231253s Jan 12 18:04:31.808: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:33.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.061667121s Jan 12 18:04:33.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:35.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m12.06118288s Jan 12 18:04:35.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:37.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.061964478s Jan 12 18:04:37.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:39.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m16.062175493s Jan 12 18:04:39.808: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:41.811: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.065368793s Jan 12 18:04:41.811: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:43.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m20.061378752s Jan 12 18:04:43.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:45.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.06091206s Jan 12 18:04:45.806: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:47.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.061855171s Jan 12 18:04:47.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:49.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.061332953s Jan 12 18:04:49.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:51.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m28.061647939s Jan 12 18:04:51.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:53.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.061901233s Jan 12 18:04:53.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:55.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.061835119s Jan 12 18:04:55.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:57.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.062030376s Jan 12 18:04:57.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:04:59.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m36.06165455s Jan 12 18:04:59.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:01.809: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.064035587s Jan 12 18:05:01.809: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:03.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m40.061674919s Jan 12 18:05:03.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:05.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.062044276s Jan 12 18:05:05.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:07.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m44.061588031s Jan 12 18:05:07.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:09.808: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.062285338s Jan 12 18:05:09.808: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:11.809: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m48.063759237s Jan 12 18:05:11.809: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:13.809: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.064156651s Jan 12 18:05:13.810: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:15.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m52.061083945s Jan 12 18:05:15.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:17.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.060792101s Jan 12 18:05:17.806: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:19.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m56.061014288s Jan 12 18:05:19.806: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:21.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.061024679s Jan 12 18:05:21.806: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:23.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m0.061823946s Jan 12 18:05:23.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:25.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.062053938s Jan 12 18:05:25.808: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:27.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m4.061648729s Jan 12 18:05:27.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:29.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.060867632s Jan 12 18:05:29.806: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:31.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m8.060964014s Jan 12 18:05:31.806: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:33.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.061301453s Jan 12 18:05:33.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:35.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.061816727s Jan 12 18:05:35.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:37.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.061777027s Jan 12 18:05:37.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:39.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m16.061862671s Jan 12 18:05:39.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:41.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.061637683s Jan 12 18:05:41.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:43.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m20.061361369s Jan 12 18:05:43.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:45.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.061515654s Jan 12 18:05:45.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:47.808: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m24.062575902s Jan 12 18:05:47.808: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:49.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.061769928s Jan 12 18:05:49.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:51.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m28.061760828s Jan 12 18:05:51.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:53.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.061039349s Jan 12 18:05:53.806: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:05:55.814: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
... skipping 25 lines of identical 2-second poll entries (Pod "httpd": Phase="Running", readiness=false; conditions unchanged) ...
Elapsed: 4m12.06181826s Jan 12 18:07:35.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:37.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.062008746s Jan 12 18:07:37.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:39.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m16.061777097s Jan 12 18:07:39.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:41.811: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.066147318s Jan 12 18:07:41.812: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:43.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m20.06117795s Jan 12 18:07:43.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:45.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.06177069s Jan 12 18:07:45.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:47.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m24.061800388s Jan 12 18:07:47.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:49.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.061589136s Jan 12 18:07:49.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:51.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m28.061850021s Jan 12 18:07:51.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:53.809: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.063713062s Jan 12 18:07:53.809: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:55.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m32.061823443s Jan 12 18:07:55.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:57.810: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.065073597s Jan 12 18:07:57.810: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:07:59.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m36.06113265s Jan 12 18:07:59.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:01.817: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.071475648s Jan 12 18:08:01.817: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:03.815: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m40.069224818s Jan 12 18:08:03.815: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:05.810: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.064799051s Jan 12 18:08:05.810: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:07.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m44.061581179s Jan 12 18:08:07.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:09.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.06177813s Jan 12 18:08:09.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:11.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m48.061754801s Jan 12 18:08:11.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:13.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.061495148s Jan 12 18:08:13.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:15.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m52.062080001s Jan 12 18:08:15.808: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:17.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.061763915s Jan 12 18:08:17.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:19.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m56.061976608s Jan 12 18:08:19.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:21.806: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.061064925s Jan 12 18:08:21.806: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }] Jan 12 18:08:23.807: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 5m0.061799337s Jan 12 18:08:23.807: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }]
Jan 12 18:08:23.838: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.092354404s
Jan 12 18:08:23.838: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'i-03f9dde5751a3fd38' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:23 +0000 UTC }]
Jan 12 18:08:23.838: INFO: Pod httpd failed to be running and ready.
Jan 12 18:08:23.838: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [httpd]
Jan 12 18:08:23.838: FAIL: Expected
    <bool>: false
to equal
    <bool>: true
Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1()
... skipping 21 lines ...
STEP: Collecting events from namespace "kubectl-5735". 01/12/23 18:08:24.488
STEP: Found 6 events.
01/12/23 18:08:24.519
Jan 12 18:08:24.519: INFO: At 2023-01-12 18:03:23 +0000 UTC - event for httpd: {default-scheduler } Scheduled: Successfully assigned kubectl-5735/httpd to i-03f9dde5751a3fd38
Jan 12 18:08:24.519: INFO: At 2023-01-12 18:03:24 +0000 UTC - event for httpd: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
Jan 12 18:08:24.519: INFO: At 2023-01-12 18:03:24 +0000 UTC - event for httpd: {kubelet i-03f9dde5751a3fd38} Created: Created container httpd
Jan 12 18:08:24.519: INFO: At 2023-01-12 18:03:25 +0000 UTC - event for httpd: {kubelet i-03f9dde5751a3fd38} Started: Started container httpd
Jan 12 18:08:24.519: INFO: At 2023-01-12 18:03:39 +0000 UTC - event for httpd: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.52.155:80/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 18:08:24.519: INFO: At 2023-01-12 18:06:49 +0000 UTC - event for httpd: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.52.155:80/": dial tcp 172.20.52.155:80: i/o timeout (Client.Timeout exceeded while awaiting headers)
Jan 12 18:08:24.549: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 12 18:08:24.549: INFO:
Jan 12 18:08:24.581: INFO: Logging node info for node i-01daa1f0ea8dcef5d
Jan 12 18:08:24.614: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 54118 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a
topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-8462":"i-01daa1f0ea8dcef5d","ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-12 18:08:14 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} 
status} {kubelet Update v1 2023-01-12 18:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0ad0aa3a9a8de47ba,DevicePath:,},},Config:nil,},}
Jan 12 18:08:24.615: INFO:
... skipping 244 lines ...
test/e2e/storage/csi_mock_volume.go:1413
------------------------------
• [SLOW TEST] [45.752 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:230
------------------------------
• [FAILED] [304.551 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [BeforeEach]
test/e2e/apimachinery/webhook.go:90
should mutate custom resource [Conformance]
test/e2e/apimachinery/webhook.go:291
Begin Captured GinkgoWriter Output >>
... skipping 161 lines ...
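Editor's note on reading the poll output above: the e2e framework considers a pod "running and ready" only when its `Ready` condition reports `True`, and the `httpd` pod never reached that state because its HTTP readiness probe kept timing out (see the `Unhealthy` events). The check reduces to scanning the pod's condition list. A minimal sketch in Python, with condition values transcribed from the log; the helper name `is_pod_ready` and the tuple layout are illustrative, not actual e2e framework code:

```python
# Minimal sketch of the readiness evaluation performed in the poll loop:
# a pod counts as "running and ready" only if its Ready condition is True.
# Condition data below is transcribed from the log output for pod 'httpd'.

def is_pod_ready(conditions):
    """Return True if the pod's Ready condition has status "True"."""
    for cond_type, status in conditions:
        if cond_type == "Ready":
            return status == "True"
    # No Ready condition reported yet: treat the pod as not ready.
    return False

# Conditions as reported for pod 'httpd' on node i-03f9dde5751a3fd38:
httpd_conditions = [
    ("Initialized", "True"),
    ("Ready", "False"),           # ContainersNotReady: [httpd]
    ("ContainersReady", "False"), # readiness probe timing out
    ("PodScheduled", "True"),
]

print(is_pod_ready(httpd_conditions))  # → False
```

This is why the test's final assertion (`Expected <bool>: false to equal <bool>: true`) fired after the 5-minute wait: the pod stayed `Phase="Running"` throughout, but phase alone is not the readiness criterion.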
Jan 12 18:08:49.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:08:51.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:08:53.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:08:55.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:08:57.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, 
time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:08:57.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:08:57.253: INFO: Unexpected error: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.43, string=sample-webhook-deployment, string=webhook-9247): <*errors.errorString | 0xc001434680>: { s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-865554f4d9\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}", } Jan 12 18:08:57.253: FAIL: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.43, string=sample-webhook-deployment, string=webhook-9247): error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.deployWebhookAndService(0xc000c40e10, {0xc005933e30, 0x2c}, 0xc0022347d0, 0x20fb, 0x20fc) test/e2e/apimachinery/webhook.go:826 +0xed2 k8s.io/kubernetes/test/e2e/apimachinery.glob..func28.1() test/e2e/apimachinery/webhook.go:102 +0x226 ... skipping 12 lines ... 
Jan 12 18:08:57.441: INFO: At 2023-01-12 18:03:57 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-865554f4d9 to 1 Jan 12 18:08:57.441: INFO: At 2023-01-12 18:03:57 +0000 UTC - event for sample-webhook-deployment-865554f4d9: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-865554f4d9-jqj6f Jan 12 18:08:57.441: INFO: At 2023-01-12 18:03:57 +0000 UTC - event for sample-webhook-deployment-865554f4d9-jqj6f: {default-scheduler } Scheduled: Successfully assigned webhook-9247/sample-webhook-deployment-865554f4d9-jqj6f to i-06a506de3e6c2b98a Jan 12 18:08:57.441: INFO: At 2023-01-12 18:03:57 +0000 UTC - event for sample-webhook-deployment-865554f4d9-jqj6f: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 18:08:57.441: INFO: At 2023-01-12 18:03:57 +0000 UTC - event for sample-webhook-deployment-865554f4d9-jqj6f: {kubelet i-06a506de3e6c2b98a} Created: Created container sample-webhook Jan 12 18:08:57.441: INFO: At 2023-01-12 18:03:57 +0000 UTC - event for sample-webhook-deployment-865554f4d9-jqj6f: {kubelet i-06a506de3e6c2b98a} Started: Started container sample-webhook Jan 12 18:08:57.441: INFO: At 2023-01-12 18:03:59 +0000 UTC - event for sample-webhook-deployment-865554f4d9-jqj6f: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "https://172.20.52.204:8444/readyz": context deadline exceeded Jan 12 18:08:57.441: INFO: At 2023-01-12 18:04:00 +0000 UTC - event for sample-webhook-deployment-865554f4d9-jqj6f: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "https://172.20.52.204:8444/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 12 18:08:57.441: INFO: At 2023-01-12 18:04:02 +0000 UTC - event for sample-webhook-deployment-865554f4d9-jqj6f: {kubelet 
i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "https://172.20.52.204:8444/readyz": dial tcp 172.20.52.204:8444: i/o timeout Jan 12 18:08:57.471: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:08:57.471: INFO: Jan 12 18:08:57.502: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:08:57.533: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 54145 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 
{"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:08:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 
registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 18:08:57.533: INFO: ... skipping 424 lines ... [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 [1mSTEP:[0m Destroying namespace "webhook-9247" for this suite. [38;5;243m01/12/23 18:09:00.962[0m [1mSTEP:[0m Destroying namespace "webhook-9247-markers" for this suite. [38;5;243m01/12/23 18:09:00.994[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 12 18:08:57.253: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.43, string=sample-webhook-deployment, string=webhook-9247): error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 3, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)}[0m [38;5;9mIn [1m[BeforeEach][0m[38;5;9m at: [1mtest/e2e/apimachinery/webhook.go:826[0m [38;5;243m------------------------------[0m [38;5;10m• [SLOW TEST] [288.401 seconds][0m [0m[sig-network] CVE-2021-29923 [38;5;243mIPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal[0m 
[38;5;243mtest/e2e/network/funny_ips.go:93[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [607.121 seconds][0m [sig-apps] StatefulSet [38;5;243mtest/e2e/apps/framework.go:23[0m [38;5;9m[1m[It] AvailableReplicas should get updated accordingly when MinReadySeconds is enabled[0m [38;5;243mtest/e2e/apps/statefulset.go:1169[0m [38;5;243mBegin Captured GinkgoWriter Output >>[0m ... skipping 71 lines ... Jan 12 18:09:54.957: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Jan 12 18:10:04.957: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Jan 12 18:10:14.957: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Jan 12 18:10:24.957: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Jan 12 18:10:34.957: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Jan 12 18:10:34.985: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Jan 12 18:10:34.986: FAIL: Failed waiting for stateful set status.AvailableReplicas updated to 2: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForStatusAvailableReplicas({0x801e128?, 0xc0042d5380}, 0xc003284500, 0x2) test/e2e/framework/statefulset/wait.go:145 +0x231 k8s.io/kubernetes/test/e2e/apps.glob..func10.5() test/e2e/apps/statefulset.go:1190 +0x36f ... skipping 9 lines ... [1mSTEP:[0m Found 7 events. 
[38;5;243m01/12/23 18:10:35.045[0m Jan 12 18:10:35.046: INFO: At 2023-01-12 18:00:29 +0000 UTC - event for test-ss: {statefulset-controller } SuccessfulCreate: create Pod test-ss-0 in StatefulSet test-ss successful Jan 12 18:10:35.046: INFO: At 2023-01-12 18:00:29 +0000 UTC - event for test-ss-0: {default-scheduler } Scheduled: Successfully assigned statefulset-9/test-ss-0 to i-06a506de3e6c2b98a Jan 12 18:10:35.046: INFO: At 2023-01-12 18:00:30 +0000 UTC - event for test-ss-0: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Jan 12 18:10:35.046: INFO: At 2023-01-12 18:00:30 +0000 UTC - event for test-ss-0: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver Jan 12 18:10:35.046: INFO: At 2023-01-12 18:00:30 +0000 UTC - event for test-ss-0: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver Jan 12 18:10:35.046: INFO: At 2023-01-12 18:00:32 +0000 UTC - event for test-ss-0: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.41.237:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:35.046: INFO: At 2023-01-12 18:00:34 +0000 UTC - event for test-ss-0: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.41.237:80/index.html": dial tcp 172.20.41.237:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:35.074: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:10:35.074: INFO: test-ss-0 i-06a506de3e6c2b98a Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:00:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:00:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:00:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2023-01-12 18:00:29 +0000 UTC }] Jan 12 18:10:35.074: INFO: Jan 12 18:10:35.140: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:10:35.168: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 54145 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } 
{kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:08:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 
2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 
registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} ... skipping 204 lines ... Latency metrics for node i-06e12471aa18677f8 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 [1mSTEP:[0m Destroying namespace "statefulset-9" for this suite. 
01/12/23 18:10:36.687
<< End Captured GinkgoWriter Output
Jan 12 18:10:34.986: Failed waiting for stateful set status.AvailableReplicas updated to 2: timed out waiting for the condition
In [It] at: test/e2e/framework/statefulset/wait.go:145
------------------------------
• [FAILED] [324.632 seconds]
[sig-network] Networking
test/e2e/network/common/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:145
    [It] should function for pod-Service: http
    test/e2e/network/networking.go:147
... skipping 342 lines ...
Jan 12 18:10:37.930: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.067235946s
Jan 12 18:10:37.930: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Jan 12 18:10:39.931: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.067613859s
Jan 12 18:10:39.931: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Jan 12 18:10:39.961: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.098467249s
Jan 12 18:10:39.961: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Jan 12 18:10:39.963: INFO: Unexpected error:
    <*pod.timeoutError | 0xc004dbf0e0>: {
        msg: "timed out while waiting for pod nettest-3325/netserver-1 to be running and ready",
        observedObjects: [
            <*v1.Pod | 0xc003f8c480>{
                TypeMeta: {Kind: "", APIVersion: ""},
                ObjectMeta: {
... skipping 128 lines ...
Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output Jan 12 18:10:39.963: FAIL: timed out while waiting for pod nettest-3325/netserver-1 to be running and ready Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc003da0a80, {0x75c6f5c, 0x9}, 0xc0041bf800) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc003da0a80, 0x7f793c1e3dc0?) test/e2e/framework/network/utils.go:763 +0x55 ... skipping 26 lines ... Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:18 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:18 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:18 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:18 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Started: Started container webserver Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:18 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Created: Created container webserver Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:18 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:58 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.40.138:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:58 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe 
failed: Get "http://172.20.40.138:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:58 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.48.120:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:40.026: INFO: At 2023-01-12 18:05:58 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Liveness probe failed: Get "http://172.20.48.120:8083/healthz": dial tcp 172.20.48.120:8083: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:40.026: INFO: At 2023-01-12 18:06:28 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "http://172.20.48.120:8083/healthz": dial tcp 172.20.48.120:8083: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:40.026: INFO: At 2023-01-12 18:06:28 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Unhealthy: Liveness probe failed: Get "http://172.20.48.120:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:40.026: INFO: At 2023-01-12 18:06:58 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Killing: Container webserver failed liveness probe, will be restarted Jan 12 18:10:40.026: INFO: At 2023-01-12 18:06:58 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Killing: Container webserver failed liveness probe, will be restarted Jan 12 18:10:40.026: INFO: At 2023-01-12 18:08:08 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.40.138:8083/healthz": dial tcp 172.20.40.138:8083: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:40.057: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:10:40.057: INFO: netserver-0 i-01daa1f0ea8dcef5d Running 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC }] Jan 12 18:10:40.057: INFO: netserver-1 i-03f9dde5751a3fd38 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC }] Jan 12 18:10:40.057: INFO: netserver-2 i-06a506de3e6c2b98a Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC }] Jan 12 18:10:40.057: INFO: netserver-3 i-06e12471aa18677f8 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:17 +0000 UTC }] Jan 12 18:10:40.057: INFO: ... skipping 210 lines ... [1mSTEP:[0m Destroying namespace "nettest-3325" for this suite. 
01/12/23 18:10:41.951
<< End Captured GinkgoWriter Output
Jan 12 18:10:39.963: timed out while waiting for pod nettest-3325/netserver-1 to be running and ready
In [It] at: test/e2e/framework/network/utils.go:866
------------------------------
• [FAILED] [302.359 seconds]
[sig-network] Networking
test/e2e/network/common/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:145
    [It] should function for node-Service: http
    test/e2e/network/networking.go:192
... skipping 316 lines ...
Jan 12 18:10:44.595: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.058834885s
Jan 12 18:10:44.595: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 12 18:10:46.595: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.059366447s
Jan 12 18:10:46.595: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 12 18:10:46.625: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.088567259s
Jan 12 18:10:46.625: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 12 18:10:46.626: INFO: Unexpected error:
    <*pod.timeoutError | 0xc004cdee70>: {
        msg: "timed out while waiting for pod nettest-6362/netserver-0 to be running and ready",
        observedObjects: [
            <*v1.Pod | 0xc00324b200>{
                TypeMeta: {Kind: "", APIVersion: ""},
                ObjectMeta: {
... skipping 128 lines ...
Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
Jan 12 18:10:46.626: FAIL: timed out while waiting for pod nettest-6362/netserver-0 to be running and ready

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc001128380, {0x75c6f5c, 0x9}, 0xc00338a330)
    test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc001128380, 0x7ff66a3ebcb8?)
    test/e2e/framework/network/utils.go:763 +0x55
... skipping 10 lines ...
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] Networking
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/12/23 18:10:46.657
STEP: Collecting events from namespace "nettest-6362". 01/12/23 18:10:46.657
STEP: Found 30 events. 01/12/23 18:10:46.687
Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:46 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e1db79645ae45bfe7cf3f46b585ea943c359855a8ff8fc5537afea99f111b1d5": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:46 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-6362/netserver-0 to i-01daa1f0ea8dcef5d
Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:46 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-6362/netserver-1 to i-03f9dde5751a3fd38
Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:46 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned nettest-6362/netserver-2 to i-06a506de3e6c2b98a
Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:46
+0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned nettest-6362/netserver-3 to i-06e12471aa18677f8 Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:47 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Created: Created container webserver Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:47 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:47 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:47 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:47 +0000 UTC - event for netserver-2: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:47 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:47 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Created: Created container webserver Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:47 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Started: Started container webserver Jan 12 18:10:46.687: INFO: At 2023-01-12 18:05:48 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Started: Started container webserver Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:01 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "849be72aa35458ff29636e59d2ecca1d6a3ac0d9a3eca58074f83206e8a0e61a": 
plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:15 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Started: Started container webserver Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:15 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Created: Created container webserver Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:15 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:36 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Liveness probe failed: Get "http://172.20.34.228:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:36 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Readiness probe failed: Get "http://172.20.34.228:8083/healthz": dial tcp 172.20.34.228:8083: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:37 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.52.218:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:37 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.52.218:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:56 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Liveness probe failed: Get "http://172.20.43.186:8083/healthz": dial tcp 172.20.43.186:8083: i/o timeout (Client.Timeout 
exceeded while awaiting headers) Jan 12 18:10:46.687: INFO: At 2023-01-12 18:06:56 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "http://172.20.43.186:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:46.687: INFO: At 2023-01-12 18:07:06 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Unhealthy: Readiness probe failed: Get "http://172.20.34.228:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:46.687: INFO: At 2023-01-12 18:07:26 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Liveness probe failed: Get "http://172.20.43.186:8083/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:46.687: INFO: At 2023-01-12 18:07:36 +0000 UTC - event for netserver-3: {kubelet i-06e12471aa18677f8} Killing: Container webserver failed liveness probe, will be restarted Jan 12 18:10:46.687: INFO: At 2023-01-12 18:07:37 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Killing: Container webserver failed liveness probe, will be restarted Jan 12 18:10:46.687: INFO: At 2023-01-12 18:07:37 +0000 UTC - event for netserver-1: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.52.218:8083/healthz": dial tcp 172.20.52.218:8083: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 18:10:46.687: INFO: At 2023-01-12 18:07:56 +0000 UTC - event for netserver-0: {kubelet i-01daa1f0ea8dcef5d} Killing: Container webserver failed liveness probe, will be restarted Jan 12 18:10:46.717: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:10:46.717: INFO: netserver-0 i-01daa1f0ea8dcef5d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC }] Jan 12 18:10:46.717: INFO: netserver-1 i-03f9dde5751a3fd38 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC }] Jan 12 18:10:46.717: INFO: netserver-2 i-06a506de3e6c2b98a Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:06:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:06:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC }] Jan 12 18:10:46.717: INFO: netserver-3 i-06e12471aa18677f8 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:05:46 +0000 UTC }] Jan 12 18:10:46.717: INFO: ... skipping 206 lines ... [1mSTEP:[0m Destroying namespace "nettest-6362" for this suite. 
01/12/23 18:10:48.469
<< End Captured GinkgoWriter Output
Jan 12 18:10:46.626: timed out while waiting for pod nettest-6362/netserver-0 to be running and ready
In [It] at: test/e2e/framework/network/utils.go:866
------------------------------
• [FAILED] [304.071 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [BeforeEach]
test/e2e/apimachinery/webhook.go:90
  should be able to deny attaching pod [Conformance]
  test/e2e/apimachinery/webhook.go:209
Begin Captured GinkgoWriter Output >>
... skipping 162 lines ...
Jan 12 18:12:00.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 12 18:12:02.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local),
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:12:04.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:12:06.321: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:12:08.321: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:12:08.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 18:12:08.349: INFO: 
Unexpected error: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.43, string=sample-webhook-deployment, string=webhook-9341): <*errors.errorString | 0xc0014eb770>: { s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-865554f4d9\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}", } Jan 12 18:12:08.349: FAIL: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.43, string=sample-webhook-deployment, string=webhook-9341): error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.deployWebhookAndService(0xc000ebdef0, {0xc00406f560, 0x2c}, 0xc004069180, 0x20fb, 0x20fc) test/e2e/apimachinery/webhook.go:826 +0xed2 k8s.io/kubernetes/test/e2e/apimachinery.glob..func28.1() test/e2e/apimachinery/webhook.go:102 +0x226 ... skipping 12 lines ... Jan 12 18:12:08.528: INFO: At 2023-01-12 18:07:08 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-865554f4d9 to 1 Jan 12 18:12:08.528: INFO: At 2023-01-12 18:07:08 +0000 UTC - event for sample-webhook-deployment-865554f4d9: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-865554f4d9-p8lql Jan 12 18:12:08.528: INFO: At 2023-01-12 18:07:08 +0000 UTC - event for sample-webhook-deployment-865554f4d9-p8lql: {default-scheduler } Scheduled: Successfully assigned webhook-9341/sample-webhook-deployment-865554f4d9-p8lql to i-06a506de3e6c2b98a Jan 12 18:12:08.528: INFO: At 2023-01-12 18:07:08 +0000 UTC - event for sample-webhook-deployment-865554f4d9-p8lql: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 12 18:12:08.528: INFO: At 2023-01-12 18:07:08 +0000 UTC - event for sample-webhook-deployment-865554f4d9-p8lql: {kubelet i-06a506de3e6c2b98a} Created: Created container sample-webhook Jan 12 18:12:08.528: INFO: At 2023-01-12 18:07:09 +0000 UTC - event for sample-webhook-deployment-865554f4d9-p8lql: {kubelet i-06a506de3e6c2b98a} Started: Started container sample-webhook Jan 12 18:12:08.528: INFO: At 2023-01-12 18:07:10 +0000 UTC - event for 
sample-webhook-deployment-865554f4d9-p8lql: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "https://172.20.54.37:8444/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 12 18:12:08.528: INFO: At 2023-01-12 18:07:20 +0000 UTC - event for sample-webhook-deployment-865554f4d9-p8lql: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "https://172.20.54.37:8444/readyz": context deadline exceeded Jan 12 18:12:08.528: INFO: At 2023-01-12 18:07:24 +0000 UTC - event for sample-webhook-deployment-865554f4d9-p8lql: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Get "https://172.20.54.37:8444/readyz": dial tcp 172.20.54.37:8444: i/o timeout Jan 12 18:12:08.556: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:12:08.556: INFO: Jan 12 18:12:08.585: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:12:08.614: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 54145 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:08:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 18:12:08.614: INFO: ... skipping 376 lines ... [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 STEP: Destroying namespace "webhook-9341" for this suite. 01/12/23 18:12:11.624 STEP: Destroying namespace "webhook-9341-markers" for this suite.
01/12/23 18:12:11.654 << End Captured GinkgoWriter Output Jan 12 18:12:08.349: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.43, string=sample-webhook-deployment, string=webhook-9341): error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 12, 18, 7, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} In [BeforeEach] at: test/e2e/apimachinery/webhook.go:826 ------------------------------ • [FAILED] [612.814 seconds] [sig-apps] StatefulSet test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:103 [It] should implement legacy replacement when the update strategy is OnDelete test/e2e/apps/statefulset.go:509 ... skipping 75 lines ...
Jan 12 18:12:56.954: INFO: Found 1 stateful pods, waiting for 3 Jan 12 18:13:06.954: INFO: Found 1 stateful pods, waiting for 3 Jan 12 18:13:16.955: INFO: Found 1 stateful pods, waiting for 3 Jan 12 18:13:26.954: INFO: Found 1 stateful pods, waiting for 3 Jan 12 18:13:36.954: INFO: Found 1 stateful pods, waiting for 3 Jan 12 18:13:36.983: INFO: Found 1 stateful pods, waiting for 3 Jan 12 18:13:36.984: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801e128?, 0xc0002a3040}, 0x3, 0x3, 0xc001602f00) test/e2e/framework/statefulset/wait.go:58 +0xf9 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.9() test/e2e/apps/statefulset.go:518 +0x21b [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Jan 12 18:13:37.013: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-7929 describe po ss2-0' Jan 12 18:13:37.307: INFO: stderr: "" Jan 12 18:13:37.307: INFO: stdout: "Name: ss2-0\nNamespace: statefulset-7929\nPriority: 0\nService Account: default\nNode: i-01daa1f0ea8dcef5d/172.20.40.141\nStart Time: Thu, 12 Jan 2023 18:03:36 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-7b6c9599d5\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations: <none>\nStatus: Running\nIP: 172.20.44.106\nIPs:\n IP: 172.20.44.106\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: containerd://b900f8858b414aa045d619bf1824e06d79c8b93fb16f03e7633c8a894dcd05d9\n Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-4\n Image ID: 
registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Thu, 12 Jan 2023 18:03:37 +0000\n Ready: False\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lpsfc (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-lpsfc:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10m default-scheduler Successfully assigned statefulset-7929/ss2-0 to i-01daa1f0ea8dcef5d\n Normal Pulled 10m kubelet Container image \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\" already present on machine\n Normal Created 10m kubelet Created container webserver\n Normal Started 10m kubelet Started container webserver\n Warning Unhealthy 4m59s (x284 over 9m58s) kubelet Readiness probe failed: Get \"http://172.20.44.106:80/index.html\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n" Jan 12 18:13:37.307: INFO: Output of kubectl describe ss2-0: Name: ss2-0 Namespace: statefulset-7929 Priority: 0 Service Account: default ... skipping 45 lines ... 
Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully assigned statefulset-7929/ss2-0 to i-01daa1f0ea8dcef5d Normal Pulled 10m kubelet Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Normal Created 10m kubelet Created container webserver Normal Started 10m kubelet Started container webserver Warning Unhealthy 4m59s (x284 over 9m58s) kubelet Readiness probe failed: Get "http://172.20.44.106:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:13:37.307: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-7929 logs ss2-0 --tail=100' Jan 12 18:13:37.548: INFO: stderr: "" Jan 12 18:13:37.548: INFO: stdout: "[Thu Jan 12 18:03:37.811958 2023] [mpm_event:notice] [pid 1:tid 139740152994664] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Jan 12 18:03:37.812010 2023] [core:notice] [pid 1:tid 139740152994664] AH00094: Command line: 'httpd -D FOREGROUND'\n" Jan 12 18:13:37.548: INFO: Last 100 log lines of ss2-0: ... skipping 16 lines ... STEP: Found 7 events.
01/12/23 18:13:47.875 Jan 12 18:13:47.875: INFO: At 2023-01-12 18:03:36 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful Jan 12 18:13:47.875: INFO: At 2023-01-12 18:03:36 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-7929/ss2-0 to i-01daa1f0ea8dcef5d Jan 12 18:13:47.875: INFO: At 2023-01-12 18:03:37 +0000 UTC - event for ss2-0: {kubelet i-01daa1f0ea8dcef5d} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Jan 12 18:13:47.875: INFO: At 2023-01-12 18:03:37 +0000 UTC - event for ss2-0: {kubelet i-01daa1f0ea8dcef5d} Created: Created container webserver Jan 12 18:13:47.875: INFO: At 2023-01-12 18:03:37 +0000 UTC - event for ss2-0: {kubelet i-01daa1f0ea8dcef5d} Started: Started container webserver Jan 12 18:13:47.875: INFO: At 2023-01-12 18:03:39 +0000 UTC - event for ss2-0: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "http://172.20.44.106:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:13:47.875: INFO: At 2023-01-12 18:13:37 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful Jan 12 18:13:47.904: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:13:47.904: INFO: Jan 12 18:13:47.934: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:13:47.963: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 55493 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux
node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:13:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} ... skipping 182 lines ... Latency metrics for node i-06e12471aa18677f8 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 [1mSTEP:[0m Destroying namespace "statefulset-7929" for this suite. [38;5;243m01/12/23 18:13:49.403[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 12 18:13:36.984: Failed waiting for pods to enter running: timed out waiting for the condition[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1mtest/e2e/framework/statefulset/wait.go:58[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [1803.831 seconds][0m [sig-apps] StatefulSet [38;5;243mtest/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [38;5;243mtest/e2e/apps/statefulset.go:103[0m [38;5;9m[1m[It] should perform canary updates and phased rolling updates of template modifications [Conformance][0m [38;5;243mtest/e2e/apps/statefulset.go:317[0m ... skipping 108 lines ... 
Jan 12 17:53:40.214: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 12 17:53:40.214: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=false
Jan 12 17:53:50.214: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 12 17:53:50.214: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=false
Jan 12 17:53:50.242: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 12 17:53:50.242: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=false
Jan 12 17:53:50.242: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801e128?, 0xc0020c3380}, 0x3, 0x3, 0xc0034d6500)
	test/e2e/framework/statefulset/wait.go:58 +0xf9
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.8() test/e2e/apps/statefulset.go:333 +0x273 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Jan 12 17:53:50.271: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-8023 describe po ss2-0' Jan 12 17:53:50.525: INFO: stderr: "" Jan 12 17:53:50.525: INFO: stdout: "Name: ss2-0\nNamespace: statefulset-8023\nPriority: 0\nService Account: default\nNode: i-06a506de3e6c2b98a/172.20.33.153\nStart Time: Thu, 12 Jan 2023 17:43:50 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-7b6c9599d5\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations: <none>\nStatus: Running\nIP: 172.20.53.10\nIPs:\n IP: 172.20.53.10\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: containerd://2214a7c5757bb4011d2ed34cf40b749caee5860dc1bceefa4f72fbd5958de464\n Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-4\n Image ID: registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Thu, 12 Jan 2023 17:44:30 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ml2gf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-ml2gf:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: 
<none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10m default-scheduler Successfully assigned statefulset-8023/ss2-0 to i-06a506de3e6c2b98a\n Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"65e21f7a86ac528898cd4f283323d85713f97acadeaa95b9578d0da294b366b0\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m45s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"93f64d83056ca1f156500d2c4ae16b77671fa551e3f72981b8cc6306f8d4c9c1\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m33s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f27f63a05c0ad819180a03a1ceb2b358b41645e5c23523d6e24c5b18c67088\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Normal Pulled 9m20s kubelet Container image \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\" already present on machine\n Normal Created 9m20s kubelet Created container webserver\n Normal Started 9m20s kubelet Started container webserver\n" Jan 12 17:53:50.525: INFO: Output of kubectl describe ss2-0: Name: ss2-0 Namespace: statefulset-8023 Priority: 0 Service Account: default ... skipping 42 lines ... 
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age    From               Message
  ----     ------                  ----   ----               -------
  Normal   Scheduled               10m    default-scheduler  Successfully assigned statefulset-8023/ss2-0 to i-06a506de3e6c2b98a
  Warning  FailedCreatePodSandBox  10m    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "65e21f7a86ac528898cd4f283323d85713f97acadeaa95b9578d0da294b366b0": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
  Warning  FailedCreatePodSandBox  9m45s  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "93f64d83056ca1f156500d2c4ae16b77671fa551e3f72981b8cc6306f8d4c9c1": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
  Warning  FailedCreatePodSandBox  9m33s  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c4f27f63a05c0ad819180a03a1ceb2b358b41645e5c23523d6e24c5b18c67088": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
  Normal   Pulled                  9m20s  kubelet            Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
  Normal   Created                 9m20s  kubelet            Created container webserver
  Normal   Started                 9m20s  kubelet            Started container webserver
Jan 12 17:53:50.525: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-8023 logs ss2-0 --tail=100'
Jan 12 17:53:50.753: INFO: stderr: ""
... skipping 100 lines ...
172.20.33.153 - - [12/Jan/2023:17:53:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.33.153 - - [12/Jan/2023:17:53:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.33.153 - - [12/Jan/2023:17:53:50 +0000] "GET /index.html HTTP/1.1" 200 45 Jan 12 17:53:50.753: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-8023 describe po ss2-1' Jan 12 17:53:50.995: INFO: stderr: "" Jan 12 17:53:50.995: INFO: stdout: "Name: ss2-1\nNamespace: statefulset-8023\nPriority: 0\nService Account: default\nNode: i-01daa1f0ea8dcef5d/172.20.40.141\nStart Time: Thu, 12 Jan 2023 17:44:31 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-7b6c9599d5\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-1\nAnnotations: <none>\nStatus: Running\nIP: 172.20.34.32\nIPs:\n IP: 172.20.34.32\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: containerd://4eaa427200ee011ae11ade383e06258eddea77b60092218f9be863c565c90c0e\n Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-4\n Image ID: registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Thu, 12 Jan 2023 17:49:37 +0000\n Ready: False\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86q46 (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-86q46:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: 
<none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9m19s default-scheduler Successfully assigned statefulset-8023/ss2-1 to i-01daa1f0ea8dcef5d\n Warning FailedCreatePodSandBox 9m19s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"26ab6eb10c3b7812ca51ec0f75807ee0ccb8e0e34a0ba7c86ba945c64a3e4d1d\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m5s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"13def0cee1f2367855a3124d95031bb7d97ae57948fd2707a813bcf18fdecd53\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m54s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"561a59fdeec22bf9888c12593453a3bdc6c974fcf664273b22e83cff1ceb60bd\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m39s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"c64e5f3d3af68aa672d87c9c85fdcb70cccb64e2acb6a3d9903e021c93633506\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m26s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4c5a5a89fa23ab1db7c2aefcfd61ff2d54f00dc85e23f28f0232465d6256e276\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m15s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"7a86724a4234d1b2abffec2ce75bf417058cf4d9e5e78307f6d835be37e88814\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m4s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"6fff8df40e9ad9531eb0dc472e7a8e4b12d3e3071a0d9309314143805d99c8a1\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 7m50s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe6dec1765a7d59052d13db4d120d07f659fb4ef42cfe188058cb9b3d5bbbb7\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 7m37s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"91f635defa7e691a5017d709b1064914135a9d3eb78132e5c649b22ae6cf13ca\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 4m28s (x15 over 7m26s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"f785947709e13ee620629aa0be4b0cafc5a1afb338a54e27af58eb0384772bd3\": plugin 
type=\"cilium-cni\" name=\"cilium\" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Normal Pulled 4m13s kubelet Container image \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\" already present on machine\n Normal Created 4m13s kubelet Created container webserver\n" Jan 12 17:53:50.995: INFO: Output of kubectl describe ss2-1: Name: ss2-1 Namespace: statefulset-8023 Priority: 0 Service Account: default ... skipping 42 lines ... Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m19s default-scheduler Successfully assigned statefulset-8023/ss2-1 to i-01daa1f0ea8dcef5d Warning FailedCreatePodSandBox 9m19s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "26ab6eb10c3b7812ca51ec0f75807ee0ccb8e0e34a0ba7c86ba945c64a3e4d1d": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m5s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "13def0cee1f2367855a3124d95031bb7d97ae57948fd2707a813bcf18fdecd53": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m54s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "561a59fdeec22bf9888c12593453a3bdc6c974fcf664273b22e83cff1ceb60bd": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m39s kubelet Failed to create pod sandbox: rpc error: code 
= Unknown desc = failed to setup network for sandbox "c64e5f3d3af68aa672d87c9c85fdcb70cccb64e2acb6a3d9903e021c93633506": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m26s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4c5a5a89fa23ab1db7c2aefcfd61ff2d54f00dc85e23f28f0232465d6256e276": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m15s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7a86724a4234d1b2abffec2ce75bf417058cf4d9e5e78307f6d835be37e88814": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m4s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6fff8df40e9ad9531eb0dc472e7a8e4b12d3e3071a0d9309314143805d99c8a1": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 7m50s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5fe6dec1765a7d59052d13db4d120d07f659fb4ef42cfe188058cb9b3d5bbbb7": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 7m37s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "91f635defa7e691a5017d709b1064914135a9d3eb78132e5c649b22ae6cf13ca": plugin type="cilium-cni" name="cilium" 
failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 4m28s (x15 over 7m26s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f785947709e13ee620629aa0be4b0cafc5a1afb338a54e27af58eb0384772bd3": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Normal Pulled 4m13s kubelet Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Normal Created 4m13s kubelet Created container webserver Jan 12 17:53:50.995: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-8023 logs ss2-1 --tail=100' Jan 12 17:53:51.212: INFO: stderr: "" Jan 12 17:53:51.212: INFO: stdout: "[Thu Jan 12 17:49:37.363696 2023] [mpm_event:notice] [pid 1:tid 140620319894376] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Jan 12 17:49:37.363746 2023] [core:notice] [pid 1:tid 140620319894376] AH00094: Command line: 'httpd -D FOREGROUND'\n172.20.40.141 - - [12/Jan/2023:17:49:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:44 +0000] \"GET /index.html 
HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:49:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:08 +0000] 
\"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - 
[12/Jan/2023:17:50:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.40.141 - - [12/Jan/2023:17:50:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" ... skipping 76 lines ... 
172.20.40.141 - - [12/Jan/2023:17:50:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.40.141 - - [12/Jan/2023:17:50:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.40.141 - - [12/Jan/2023:17:50:51 +0000] "GET /index.html HTTP/1.1" 200 45 Jan 12 17:53:51.212: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-8023 describe po ss2-2' Jan 12 17:53:51.460: INFO: stderr: "" Jan 12 17:53:51.460: INFO: stdout: "Name: ss2-2\nNamespace: statefulset-8023\nPriority: 0\nService Account: default\nNode: i-03f9dde5751a3fd38/172.20.40.115\nStart Time: Thu, 12 Jan 2023 17:49:45 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-7b6c9599d5\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-2\nAnnotations: <none>\nStatus: Running\nIP: 172.20.46.76\nIPs:\n IP: 172.20.46.76\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: containerd://7d4f6c0a3795e84adb502abac89447738df9aa04424d370b1276cea5b3344dc2\n Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-4\n Image ID: registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Thu, 12 Jan 2023 17:49:46 +0000\n Ready: False\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rmfzl (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-rmfzl:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: 
<none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4m6s default-scheduler Successfully assigned statefulset-8023/ss2-2 to i-03f9dde5751a3fd38\n Normal Pulled 4m5s kubelet Container image \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\" already present on machine\n Normal Created 4m5s kubelet Created container webserver\n Normal Started 4m5s kubelet Started container webserver\n Warning Unhealthy 4m3s kubelet Readiness probe failed: Get \"http://172.20.46.76:80/index.html\": dial tcp 172.20.46.76:80: i/o timeout (Client.Timeout exceeded while awaiting headers)\n Warning Unhealthy 3m42s (x21 over 4m2s) kubelet Readiness probe failed: Get \"http://172.20.46.76:80/index.html\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n" Jan 12 17:53:51.460: INFO: Output of kubectl describe ss2-2: Name: ss2-2 Namespace: statefulset-8023 Priority: 0 Service Account: default ... skipping 45 lines ... 
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  4m6s               default-scheduler  Successfully assigned statefulset-8023/ss2-2 to i-03f9dde5751a3fd38
  Normal   Pulled     4m5s               kubelet            Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
  Normal   Created    4m5s               kubelet            Created container webserver
  Normal   Started    4m5s               kubelet            Started container webserver
  Warning  Unhealthy  4m3s               kubelet            Readiness probe failed: Get "http://172.20.46.76:80/index.html": dial tcp 172.20.46.76:80: i/o timeout (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  3m42s (x21 over 4m2s)  kubelet        Readiness probe failed: Get "http://172.20.46.76:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 12 17:53:51.460: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-8023 logs ss2-2 --tail=100'
Jan 12 17:53:51.703: INFO: stderr: ""
Jan 12 17:53:51.703: INFO: stdout: "[Thu Jan 12 17:49:46.854298 2023] [mpm_event:notice] [pid 1:tid 139934177737576] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Jan 12 17:49:46.854617 2023] [core:notice] [pid 1:tid 139934177737576] AH00094: Command line: 'httpd -D FOREGROUND'\n"
Jan 12 17:53:51.703: INFO: Last 100 log lines of ss2-2:
... skipping 62 lines ...
Jan 12 18:13:11.940: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan 12 18:13:21.941: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan 12 18:13:31.941: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan 12 18:13:41.940: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan 12 18:13:51.942: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan 12 18:13:51.970: INFO: Waiting for stateful set status.replicas to become 0, currently 3
Jan 12 18:13:51.970: FAIL: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForStatusReplicas({0x801e128?, 0xc0020c3380}, 0xc0034d7900, 0x0)
	test/e2e/framework/statefulset/wait.go:170 +0x231
k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x801e128, 0xc0020c3380}, {0xc00350e460, 0x10})
	test/e2e/framework/statefulset/rest.go:87 +0x319
... skipping 8 lines ...
dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/12/23 18:13:52
STEP: Collecting events from namespace "statefulset-8023". 01/12/23 18:13:52
STEP: Found 31 events.
[38;5;243m01/12/23 18:13:52.029[0m Jan 12 18:13:52.029: INFO: At 2023-01-12 17:43:50 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful Jan 12 18:13:52.029: INFO: At 2023-01-12 17:43:50 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-8023/ss2-0 to i-06a506de3e6c2b98a Jan 12 18:13:52.029: INFO: At 2023-01-12 17:43:50 +0000 UTC - event for ss2-0: {kubelet i-06a506de3e6c2b98a} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "65e21f7a86ac528898cd4f283323d85713f97acadeaa95b9578d0da294b366b0": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:44:05 +0000 UTC - event for ss2-0: {kubelet i-06a506de3e6c2b98a} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "93f64d83056ca1f156500d2c4ae16b77671fa551e3f72981b8cc6306f8d4c9c1": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:44:17 +0000 UTC - event for ss2-0: {kubelet i-06a506de3e6c2b98a} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c4f27f63a05c0ad819180a03a1ceb2b358b41645e5c23523d6e24c5b18c67088": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:44:30 +0000 UTC - event for ss2-0: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Jan 12 18:13:52.029: INFO: At 2023-01-12 
17:44:30 +0000 UTC - event for ss2-0: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver Jan 12 18:13:52.029: INFO: At 2023-01-12 17:44:30 +0000 UTC - event for ss2-0: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver Jan 12 18:13:52.029: INFO: At 2023-01-12 17:44:31 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful Jan 12 18:13:52.029: INFO: At 2023-01-12 17:44:31 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "26ab6eb10c3b7812ca51ec0f75807ee0ccb8e0e34a0ba7c86ba945c64a3e4d1d": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:44:31 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-8023/ss2-1 to i-01daa1f0ea8dcef5d Jan 12 18:13:52.029: INFO: At 2023-01-12 17:44:45 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "13def0cee1f2367855a3124d95031bb7d97ae57948fd2707a813bcf18fdecd53": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:44:56 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "561a59fdeec22bf9888c12593453a3bdc6c974fcf664273b22e83cff1ceb60bd": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 
2023-01-12 17:45:11 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c64e5f3d3af68aa672d87c9c85fdcb70cccb64e2acb6a3d9903e021c93633506": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:45:24 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4c5a5a89fa23ab1db7c2aefcfd61ff2d54f00dc85e23f28f0232465d6256e276": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:45:35 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7a86724a4234d1b2abffec2ce75bf417058cf4d9e5e78307f6d835be37e88814": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:45:46 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6fff8df40e9ad9531eb0dc472e7a8e4b12d3e3071a0d9309314143805d99c8a1": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:46:00 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to 
setup network for sandbox "5fe6dec1765a7d59052d13db4d120d07f659fb4ef42cfe188058cb9b3d5bbbb7": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:46:13 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "91f635defa7e691a5017d709b1064914135a9d3eb78132e5c649b22ae6cf13ca": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:46:24 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f785947709e13ee620629aa0be4b0cafc5a1afb338a54e27af58eb0384772bd3": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:13:52.029: INFO: At 2023-01-12 17:49:37 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Jan 12 18:13:52.029: INFO: At 2023-01-12 17:49:37 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} Created: Created container webserver Jan 12 18:13:52.029: INFO: At 2023-01-12 17:49:45 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful Jan 12 18:13:52.029: INFO: At 2023-01-12 17:49:45 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-8023/ss2-2 to i-03f9dde5751a3fd38 Jan 12 18:13:52.029: INFO: At 2023-01-12 17:49:46 +0000 UTC - event for ss2-2: {kubelet 
i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Jan 12 18:13:52.029: INFO: At 2023-01-12 17:49:46 +0000 UTC - event for ss2-2: {kubelet i-03f9dde5751a3fd38} Created: Created container webserver Jan 12 18:13:52.029: INFO: At 2023-01-12 17:49:46 +0000 UTC - event for ss2-2: {kubelet i-03f9dde5751a3fd38} Started: Started container webserver Jan 12 18:13:52.029: INFO: At 2023-01-12 17:49:48 +0000 UTC - event for ss2-2: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.46.76:80/index.html": dial tcp 172.20.46.76:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 18:13:52.029: INFO: At 2023-01-12 17:49:49 +0000 UTC - event for ss2-2: {kubelet i-03f9dde5751a3fd38} Unhealthy: Readiness probe failed: Get "http://172.20.46.76:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:13:52.029: INFO: At 2023-01-12 17:50:53 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "http://172.20.34.32:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:13:52.029: INFO: At 2023-01-12 17:50:55 +0000 UTC - event for ss2-1: {kubelet i-01daa1f0ea8dcef5d} Unhealthy: Readiness probe failed: Get "http://172.20.34.32:80/index.html": dial tcp 172.20.34.32:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 12 18:13:52.057: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:13:52.058: INFO: ss2-0 i-06a506de3e6c2b98a Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:43:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:44:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:44:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:43:50 +0000 UTC }] Jan 12 18:13:52.058: INFO: ss2-1 i-01daa1f0ea8dcef5d Running [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:44:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:50:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:50:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:44:31 +0000 UTC }]
Jan 12 18:13:52.058: INFO: ss2-2 i-03f9dde5751a3fd38 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:49:45 +0000 UTC }]
Jan 12 18:13:52.058: INFO:
Jan 12 18:13:52.180: INFO:
... skipping 184 lines ...
Latency metrics for node i-06e12471aa18677f8
[DeferCleanup (Each)] [sig-apps] StatefulSet
  tear down framework | framework.go:193
STEP: Destroying namespace "statefulset-8023" for this suite. 01/12/23 18:13:53.698
<< End Captured GinkgoWriter Output

Jan 12 17:53:50.242: Failed waiting for pods to enter running: timed out waiting for the condition
In [It] at: test/e2e/framework/statefulset/wait.go:58

There were additional failures detected after the initial failure.
Here's a summary - for full details run Ginkgo in verbose mode:
[FAILED] in [AfterEach] at test/e2e/framework/statefulset/wait.go:170
------------------------------
• [SLOW TEST] [966.257 seconds]
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
test/e2e/apps/statefulset.go:134
------------------------------
• [FAILED] [962.806 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
... skipping 160 lines ...
Jan 12 18:10:37.720: INFO: PersistentVolume pvc-fa6fa87a-416b-4958-b243-a5c196029bef found and phase=Bound (4m31.670971613s)
Jan 12 18:10:42.750: INFO: PersistentVolume pvc-fa6fa87a-416b-4958-b243-a5c196029bef found and phase=Bound (4m36.701513161s)
Jan 12 18:10:47.782: INFO: PersistentVolume pvc-fa6fa87a-416b-4958-b243-a5c196029bef found and phase=Bound (4m41.733351971s)
Jan 12 18:10:52.812: INFO: PersistentVolume pvc-fa6fa87a-416b-4958-b243-a5c196029bef found and phase=Bound (4m46.76355725s)
Jan 12 18:10:57.842: INFO: PersistentVolume pvc-fa6fa87a-416b-4958-b243-a5c196029bef found and phase=Bound (4m51.793471176s)
Jan 12 18:11:02.873: INFO: PersistentVolume pvc-fa6fa87a-416b-4958-b243-a5c196029bef found and phase=Bound (4m56.824229152s)
Jan 12 18:11:07.905: FAIL: no dangling PVCs
Expected
    <[]v1.PersistentVolumeClaim | len:1, cap:1>:
    - metadata:
        annotations:
          pv.kubernetes.io/bind-completed: "yes"
          pv.kubernetes.io/bound-by-controller: "yes"
... skipping 88 lines ...
Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:17 +0000 UTC - event for inline-volume-hcz9v: {default-scheduler } FailedScheduling: 0/5 nodes are available: waiting for ephemeral volume controller to create the persistentvolumeclaim "inline-volume-hcz9v-my-volume". preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.. Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:17 +0000 UTC - event for inline-volume-hcz9v-my-volume: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "no-such-storage-class" not found Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:19 +0000 UTC - event for inline-volume-tester-7gw9x: {default-scheduler } FailedScheduling: 0/5 nodes are available: waiting for ephemeral volume controller to create the persistentvolumeclaim "inline-volume-tester-7gw9x-my-volume-0". preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.. Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:19 +0000 UTC - event for inline-volume-tester-7gw9x-my-volume-0: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-hostpath-ephemeral-2577" or manually created by system administrator Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:21 +0000 UTC - event for inline-volume-tester-7gw9x: {default-scheduler } FailedScheduling: 0/5 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.. 
Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:23 +0000 UTC - event for inline-volume-tester-7gw9x-my-volume-0: {csi-hostpath-ephemeral-2577_csi-hostpathplugin-0_1a290a57-e23e-4580-b421-1d253a092aef } Provisioning: External provisioner is provisioning volume for claim "ephemeral-2577/inline-volume-tester-7gw9x-my-volume-0" Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:23 +0000 UTC - event for inline-volume-tester-7gw9x-my-volume-0: {csi-hostpath-ephemeral-2577_csi-hostpathplugin-0_1a290a57-e23e-4580-b421-1d253a092aef } ProvisioningFailed: failed to provision volume with StorageClass "ephemeral-25774lrcp": error generating accessibility requirements: no available topology found Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:26 +0000 UTC - event for inline-volume-tester-7gw9x-my-volume-0: {csi-hostpath-ephemeral-2577_csi-hostpathplugin-0_1a290a57-e23e-4580-b421-1d253a092aef } ProvisioningSucceeded: Successfully provisioned volume pvc-fa6fa87a-416b-4958-b243-a5c196029bef Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:27 +0000 UTC - event for inline-volume-tester-7gw9x: {default-scheduler } Scheduled: Successfully assigned ephemeral-2577/inline-volume-tester-7gw9x to i-03f9dde5751a3fd38 Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:27 +0000 UTC - event for inline-volume-tester-7gw9x: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-fa6fa87a-416b-4958-b243-a5c196029bef" Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:37 +0000 UTC - event for inline-volume-tester-7gw9x: {kubelet i-03f9dde5751a3fd38} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a50f647d39a5170897451a18dde0f2a0275ba3c2a395178c1ddd75cd333ebf0b": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:11:07.997: INFO: At 2023-01-12 18:00:49 +0000 UTC - event for 
inline-volume-tester-7gw9x: {kubelet i-03f9dde5751a3fd38} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5b7bb9a8ec963d2878123d96acb4a2a33d41bf6fda6e80dd6055c5270733b952": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:11:07.997: INFO: At 2023-01-12 18:01:04 +0000 UTC - event for inline-volume-tester-7gw9x: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Jan 12 18:11:07.997: INFO: At 2023-01-12 18:01:04 +0000 UTC - event for inline-volume-tester-7gw9x: {kubelet i-03f9dde5751a3fd38} Created: Created container csi-volume-tester Jan 12 18:11:07.997: INFO: At 2023-01-12 18:01:04 +0000 UTC - event for inline-volume-tester-7gw9x: {kubelet i-03f9dde5751a3fd38} Started: Started container csi-volume-tester Jan 12 18:11:07.997: INFO: At 2023-01-12 18:01:06 +0000 UTC - event for inline-volume-tester-7gw9x: {kubelet i-03f9dde5751a3fd38} Killing: Stopping container csi-volume-tester Jan 12 18:11:08.027: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:11:08.027: INFO: inline-volume-tester-7gw9x i-03f9dde5751a3fd38 Failed 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:00:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:01:37 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:01:37 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:00:27 +0000 UTC }] Jan 12 18:11:08.027: INFO: Jan 12 18:11:08.091: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:11:08.121: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 54145 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium 
beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } 
{kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:08:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 
18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 12 18:11:08.121: INFO: Logging kubelet events for node i-01daa1f0ea8dcef5d
... skipping 180 lines ...
Jan 12 18:11:09.278: INFO: Container hostpath ready: false, restart count 11
Jan 12 18:11:09.278: INFO: Container liveness-probe ready: true, restart count 0
Jan 12 18:11:09.278: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 12 18:11:09.526: INFO: Latency metrics for node i-06e12471aa18677f8
STEP: Waiting for namespaces [ephemeral-2577] to vanish 01/12/23 18:11:09.558
Jan 12 18:16:09.620: INFO: error deleting namespace ephemeral-2577: timed out waiting for the condition
STEP: uninstalling csi csi-hostpath driver 01/12/23 18:16:09.62
Jan 12 18:16:09.620: INFO: deleting *v1.ServiceAccount: ephemeral-2577-3838/csi-attacher
Jan 12 18:16:09.651: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-2577
Jan 12 18:16:09.684: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-2577
Jan 12 18:16:09.716: INFO: deleting *v1.Role: ephemeral-2577-3838/external-attacher-cfg-ephemeral-2577
Jan 12 18:16:09.752: INFO: deleting
*v1.RoleBinding: ephemeral-2577-3838/csi-attacher-role-cfg ... skipping 54 lines ... Jan 12 18:16:10.908: INFO: At 2023-01-12 18:00:22 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-03f9dde5751a3fd38} Created: Created container csi-resizer Jan 12 18:16:10.908: INFO: At 2023-01-12 18:00:22 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-03f9dde5751a3fd38} Started: Started container hostpath Jan 12 18:16:10.908: INFO: At 2023-01-12 18:00:23 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-03f9dde5751a3fd38} Pulled: Container image "registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0" already present on machine Jan 12 18:16:10.908: INFO: At 2023-01-12 18:00:23 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-03f9dde5751a3fd38} Created: Created container csi-snapshotter Jan 12 18:16:10.908: INFO: At 2023-01-12 18:00:23 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-03f9dde5751a3fd38} Started: Started container csi-snapshotter Jan 12 18:16:10.908: INFO: At 2023-01-12 18:00:23 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-03f9dde5751a3fd38} Started: Started container csi-resizer Jan 12 18:16:10.908: INFO: At 2023-01-12 18:00:36 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-03f9dde5751a3fd38} Unhealthy: Liveness probe failed: Get "http://172.20.36.56:9898/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:16:10.908: INFO: At 2023-01-12 18:02:32 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-03f9dde5751a3fd38} BackOff: Back-off restarting failed container hostpath in pod csi-hostpathplugin-0_ephemeral-2577-3838(57d988a3-b4fd-4850-a519-3f0a5606f07e) Jan 12 18:16:10.938: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:16:10.938: INFO: csi-hostpathplugin-0 i-03f9dde5751a3fd38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:00:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:15:30 +0000 UTC ContainersNotReady containers with 
unready status: [hostpath]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:15:30 +0000 UTC ContainersNotReady containers with unready status: [hostpath]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:00:19 +0000 UTC }] Jan 12 18:16:10.938: INFO: Jan 12 18:16:11.207: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:16:11.252: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 55493 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 
{"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:13:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 
registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 170 lines ...
[DeferCleanup (Each)] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/12/23 18:16:18.825
STEP: Collecting events from namespace "ephemeral-2577". 01/12/23 18:16:18.825
STEP: Found 0 events. 01/12/23 18:16:18.854
Jan 12 18:16:18.883: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 12 18:16:18.883: INFO: inline-volume-tester-7gw9x  i-03f9dde5751a3fd38  Failed  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:00:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:01:37 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:01:37 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:00:27 +0000 UTC }]
Jan 12 18:16:18.883: INFO: 
Jan 12 18:16:18.946: INFO: Logging node info for node i-01daa1f0ea8dcef5d
Jan 12 18:16:18.975: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 55493 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"}
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:13:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 12 18:16:18.976: INFO: Logging kubelet events for node i-01daa1f0ea8dcef5d
... skipping 238 lines ...
  capacity:
    storage: 1Mi
  phase: Bound
to be empty
In [It] at: test/e2e/storage/testsuites/ephemeral.go:431
------------------------------
• [FAILED] [2066.715 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    test/e2e/storage/framework/testsuite.go:50
... skipping 346 lines ...
Jan 12 17:51:06.152: INFO: Pod "inline-volume-tester-wm68r": Phase="Running", Reason="", readiness=true. Elapsed: 9m8.059382151s
Jan 12 17:51:06.152: INFO: Pod "inline-volume-tester-wm68r" satisfied condition "running"
Jan 12 17:51:06.186: INFO: Running volume expansion checks inline-volume-tester-wm68r
STEP: Expanding current pvc 01/12/23 17:51:06.215
Jan 12 17:51:06.215: INFO: currentPvcSize 1Mi, requested new size 1025Mi
STEP: Waiting for cloudprovider resize to finish 01/12/23 17:51:06.285
Jan 12 18:01:06.372: INFO: Unexpected error: While waiting for pvc resize to finish: 
    <*errors.errorString | 0xc0011095d0>: {
        s: "error while waiting for controller resize to finish: timed out waiting for the condition",
    }
Jan 12 18:01:06.372: FAIL: While waiting for pvc resize to finish: error while waiting for controller resize to finish: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*ephemeralTestSuite).DefineTests.func5.1(0xc003844900)
	test/e2e/storage/testsuites/ephemeral.go:264 +0x8c5
k8s.io/kubernetes/test/e2e/storage/testsuites.EphemeralTest.TestEphemeral({{0x801e128, 0xc001e83380}, 0xc00107f500, {0xc003b1bd60, 0xe}, {0x0, 0x0}, 0xc0018691d0, {{0xc005687218, 0x13}, ...}, ...}, ...)
	test/e2e/storage/testsuites/ephemeral.go:421 +0x755
... skipping 76 lines ...
Jan 12 18:11:08.535: INFO: At 2023-01-12 17:41:54 +0000 UTC - event for inline-volume-9c458: {default-scheduler } FailedScheduling: 0/5 nodes are available: waiting for ephemeral volume controller to create the persistentvolumeclaim "inline-volume-9c458-my-volume". preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod..
Jan 12 18:11:08.535: INFO: At 2023-01-12 17:41:54 +0000 UTC - event for inline-volume-9c458-my-volume: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "no-such-storage-class" not found Jan 12 18:11:08.535: INFO: At 2023-01-12 17:41:58 +0000 UTC - event for inline-volume-tester-wm68r: {default-scheduler } FailedScheduling: 0/5 nodes are available: waiting for ephemeral volume controller to create the persistentvolumeclaim "inline-volume-tester-wm68r-my-volume-0". preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.. Jan 12 18:11:08.535: INFO: At 2023-01-12 17:41:58 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {persistentvolume-controller } WaitForPodScheduled: waiting for pod inline-volume-tester-wm68r to be scheduled Jan 12 18:11:08.535: INFO: At 2023-01-12 17:41:59 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-hostpath-ephemeral-4377" or manually created by system administrator Jan 12 18:11:08.535: INFO: At 2023-01-12 17:49:47 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {csi-hostpath-ephemeral-4377_csi-hostpathplugin-0_4553c032-5c07-4f43-b66a-36bc26ae45be } Provisioning: External provisioner is provisioning volume for claim "ephemeral-4377/inline-volume-tester-wm68r-my-volume-0" Jan 12 18:11:08.535: INFO: At 2023-01-12 17:49:47 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {csi-hostpath-ephemeral-4377_csi-hostpathplugin-0_4553c032-5c07-4f43-b66a-36bc26ae45be } ProvisioningFailed: failed to provision volume with StorageClass "ephemeral-4377pr4l8": error generating accessibility requirements: no topology key found on CSINode i-06e12471aa18677f8 Jan 12 18:11:08.535: INFO: At 2023-01-12 17:49:50 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: 
{csi-hostpath-ephemeral-4377_csi-hostpathplugin-0_4553c032-5c07-4f43-b66a-36bc26ae45be } ProvisioningSucceeded: Successfully provisioned volume pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2 Jan 12 18:11:08.535: INFO: At 2023-01-12 17:49:51 +0000 UTC - event for inline-volume-tester-wm68r: {default-scheduler } Scheduled: Successfully assigned ephemeral-4377/inline-volume-tester-wm68r to i-06e12471aa18677f8 Jan 12 18:11:08.535: INFO: At 2023-01-12 17:49:52 +0000 UTC - event for inline-volume-tester-wm68r: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2" Jan 12 18:11:08.535: INFO: At 2023-01-12 17:50:07 +0000 UTC - event for inline-volume-tester-wm68r: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8090950ddf37a086be7ac6b1bad7bb1b61ad31b14c6f6bb7afb0359057898bef": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:11:08.535: INFO: At 2023-01-12 17:50:17 +0000 UTC - event for inline-volume-tester-wm68r: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "46b61f1b9de68e6b68f0444aecae7acebdadcec11673cef375a04a1bc3bbc333": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:11:08.535: INFO: At 2023-01-12 17:50:29 +0000 UTC - event for inline-volume-tester-wm68r: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a16eedad6ce3bbfa8bc977bc8db8369be292d46ad120e1f4852c1c193e88ca6f": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local 
cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:11:08.535: INFO: At 2023-01-12 17:50:42 +0000 UTC - event for inline-volume-tester-wm68r: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "47533dadec88fec152fefc758dc71a036fbc2f9cc0967b43ebbb90005b8fee59": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 18:11:08.535: INFO: At 2023-01-12 17:50:58 +0000 UTC - event for inline-volume-tester-wm68r: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Jan 12 18:11:08.535: INFO: At 2023-01-12 17:50:58 +0000 UTC - event for inline-volume-tester-wm68r: {kubelet i-06e12471aa18677f8} Created: Created container csi-volume-tester Jan 12 18:11:08.535: INFO: At 2023-01-12 17:50:58 +0000 UTC - event for inline-volume-tester-wm68r: {kubelet i-06e12471aa18677f8} Started: Started container csi-volume-tester Jan 12 18:11:08.535: INFO: At 2023-01-12 17:51:06 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {volume_expand } ExternalExpanding: Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC. 
Jan 12 18:11:08.535: INFO: At 2023-01-12 17:51:06 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } Resizing: External resizer is resizing volume pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2 Jan 12 18:11:08.535: INFO: At 2023-01-12 17:51:06 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } VolumeResizeFailed: resize volume "pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2" by resizer "csi-hostpath-ephemeral-4377" failed: rpc error: code = NotFound desc = volume id 8312096a-92a1-11ed-8116-c27f2a41385d does not exist in the volumes list Jan 12 18:11:08.535: INFO: At 2023-01-12 17:52:43 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } VolumeResizeFailed: resize volume "pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2" by resizer "csi-hostpath-ephemeral-4377" failed: rpc error: code = NotFound desc = volume id 8312096a-92a1-11ed-8116-c27f2a41385d does not exist in the volumes list Jan 12 18:11:08.535: INFO: At 2023-01-12 17:52:43 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } Resizing: External resizer is resizing volume pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2 Jan 12 18:11:08.535: INFO: At 2023-01-12 17:54:35 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } VolumeResizeFailed: resize volume "pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2" by resizer "csi-hostpath-ephemeral-4377" failed: rpc error: code = NotFound desc = volume id 8312096a-92a1-11ed-8116-c27f2a41385d does not exist in the volumes list Jan 12 18:11:08.535: INFO: At 2023-01-12 17:54:35 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } Resizing: External resizer is resizing volume pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2 Jan 12 18:11:08.535: INFO: At 2023-01-12 17:57:54 
+0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } Resizing: External resizer is resizing volume pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2 Jan 12 18:11:08.535: INFO: At 2023-01-12 17:57:54 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } VolumeResizeFailed: resize volume "pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2" by resizer "csi-hostpath-ephemeral-4377" failed: rpc error: code = NotFound desc = volume id 8312096a-92a1-11ed-8116-c27f2a41385d does not exist in the volumes list Jan 12 18:11:08.535: INFO: At 2023-01-12 18:01:06 +0000 UTC - event for inline-volume-tester-wm68r: {kubelet i-06e12471aa18677f8} Killing: Stopping container csi-volume-tester Jan 12 18:11:08.535: INFO: At 2023-01-12 18:03:25 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } Resizing: External resizer is resizing volume pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2 Jan 12 18:11:08.535: INFO: At 2023-01-12 18:03:25 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } VolumeResizeFailed: resize volume "pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2" by resizer "csi-hostpath-ephemeral-4377" failed: rpc error: code = NotFound desc = volume id 8312096a-92a1-11ed-8116-c27f2a41385d does not exist in the volumes list Jan 12 18:11:08.535: INFO: At 2023-01-12 18:09:24 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } Resizing: External resizer is resizing volume pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2 Jan 12 18:11:08.535: INFO: At 2023-01-12 18:09:24 +0000 UTC - event for inline-volume-tester-wm68r-my-volume-0: {external-resizer csi-hostpath-ephemeral-4377 } VolumeResizeFailed: resize volume "pvc-562a1895-bb8d-4986-ac25-2125cfa7c7c2" by resizer "csi-hostpath-ephemeral-4377" failed: rpc error: code = NotFound desc = volume 
id 8312096a-92a1-11ed-8116-c27f2a41385d does not exist in the volumes list
Jan 12 18:11:08.565: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 12 18:11:08.565: INFO: inline-volume-tester-wm68r i-06e12471aa18677f8 Failed 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:49:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:01:37 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:01:37 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:49:51 +0000 UTC }]
Jan 12 18:11:08.565: INFO:
Jan 12 18:11:08.626: INFO: Logging node info for node i-01daa1f0ea8dcef5d
Jan 12 18:11:08.656: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 54145 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:08:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 12 18:11:08.656: INFO: Logging kubelet events for node i-01daa1f0ea8dcef5d
... skipping 180 lines ...
Jan 12 18:11:09.828: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 12 18:11:09.828: INFO: pod-5f02149a-328d-4d04-b844-35c71efb4583 started at 2023-01-12 17:27:17 +0000 UTC (0+1 container statuses recorded)
Jan 12 18:11:09.828: INFO: Container write-pod ready: false, restart count 0
Jan 12 18:11:10.116: INFO: Latency metrics for node i-06e12471aa18677f8
STEP: Waiting for namespaces [ephemeral-4377] to vanish 01/12/23 18:11:10.147
Jan 12 18:16:10.209: INFO: error deleting namespace ephemeral-4377: timed out waiting for the condition
STEP: uninstalling csi csi-hostpath driver 01/12/23 18:16:10.209
Jan 12 18:16:10.209: INFO: deleting *v1.ServiceAccount: ephemeral-4377-2149/csi-attacher
Jan 12 18:16:10.241: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-4377
Jan 12 18:16:10.272: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-4377
Jan 12 18:16:10.305: INFO: deleting *v1.Role: ephemeral-4377-2149/external-attacher-cfg-ephemeral-4377
Jan 12 18:16:10.338: INFO: deleting *v1.RoleBinding: ephemeral-4377-2149/csi-attacher-role-cfg
... skipping 32 lines ...
Jan 12 18:16:11.410: INFO: deleting *v1.StatefulSet: ephemeral-4377-2149/csi-hostpathplugin
Jan 12 18:16:11.445: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-4377
STEP: deleting the driver namespace: ephemeral-4377-2149 01/12/23 18:16:11.477
STEP: Collecting events from namespace "ephemeral-4377-2149". 01/12/23 18:16:11.477
STEP: Found 13 events. 01/12/23 18:16:11.521
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:41:58 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:41:58 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3c6c436ba31e051ec290638ee8b7e6711151a7aee83c92195b7585f40ca6b91d": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:41:58 +0000 UTC - event for csi-hostpathplugin-0: {default-scheduler } Scheduled: Successfully assigned ephemeral-4377-2149/csi-hostpathplugin-0 to i-06e12471aa18677f8
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:42:12 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8eb0d09546f0d3308dfef49a904d668a96df9615f326da259ce808dd5183e99b": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:42:23 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9670e68f1fc125eb5e13882c9f4ba59f13c083c2b3ecd06d5d9bb5389ab6b2f9": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:42:37 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1b4e55d087c986451b1d38c0520bcff0753dc7ef8179406fbea1cda258330d77": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:42:50 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "014d27a8c6ec9fb51b814a6a7d54afb2acd76e2420ccc25cee96b1422f777423": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:43:03 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5377a4a9928b1195cf7405143f014e7c8df077599fa4791239e5bc466ce26faf": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:43:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7722ddf9e061ac071277b272202be213bf6e081f9e008a0617c55df79937cced": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:43:32 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0dc43d370130f3abe2d11624037f64df362f7f413409f00d6ba08b1b926d77ff": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:43:44 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5815ec3ca55c0ff0d2caf4c863dbd4d9daa47b02c27a875cd8385a6fdde88968": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:43:55 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "08e72d4a0a2a01271d4e8d723d086d874c9defb5e4dd8e73787d3787a29ad1e2": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 18:16:11.522: INFO: At 2023-01-12 17:51:55 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} BackOff: Back-off restarting failed container hostpath in pod csi-hostpathplugin-0_ephemeral-4377-2149(422f052b-d92d-4d43-a848-18ff298e9033)
Jan 12 18:16:11.551: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 12 18:16:11.551: INFO: csi-hostpathplugin-0 i-06e12471aa18677f8 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:41:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:15:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:15:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:41:58 +0000 UTC }]
Jan 12 18:16:11.551: INFO:
Jan 12 18:16:11.551: INFO: csi-hostpathplugin-0[ephemeral-4377-2149].container[csi-resizer]=Lost connection to CSI driver, exiting
Jan 12 18:16:11.842: INFO: Logging node info for node i-01daa1f0ea8dcef5d
... skipping 163 lines ...
[DeferCleanup (Each)] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/12/23 18:16:19.259
STEP: Collecting events from namespace "ephemeral-4377". 01/12/23 18:16:19.259
STEP: Found 0 events.
01/12/23 18:16:19.288
Jan 12 18:16:19.317: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 12 18:16:19.317: INFO: inline-volume-tester-wm68r i-06e12471aa18677f8 Failed 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:49:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:01:37 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:01:37 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:49:51 +0000 UTC }]
Jan 12 18:16:19.317: INFO:
Jan 12 18:16:19.378: INFO: Logging node info for node i-01daa1f0ea8dcef5d
Jan 12 18:16:19.407: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 55493 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} }
{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:13:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 
registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 12 18:16:19.408: INFO: Logging kubelet events for node i-01daa1f0ea8dcef5d
... skipping 156 lines ...
Latency metrics for node i-06e12471aa18677f8
[DeferCleanup (Each)] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
tear down framework | framework.go:193
STEP: Destroying namespace "ephemeral-4377" for this suite. 01/12/23 18:16:20.825
<< End Captured GinkgoWriter Output
Jan 12 18:01:06.372: While waiting for pvc resize to finish: error while waiting for controller resize to finish: timed out waiting for the condition
In [It] at: test/e2e/storage/testsuites/ephemeral.go:264
------------------------------
• [FAILED] [1540.334 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
... skipping 219 lines ...
Jan 12 18:12:44.193: INFO: PersistentVolume pvc-8ff4862e-dcc4-424d-8e21-093581c3543d found and phase=Bound (4m31.750901408s) Jan 12 18:12:49.225: INFO: PersistentVolume pvc-8ff4862e-dcc4-424d-8e21-093581c3543d found and phase=Bound (4m36.78233865s) Jan 12 18:12:54.257: INFO: PersistentVolume pvc-8ff4862e-dcc4-424d-8e21-093581c3543d found and phase=Bound (4m41.814625172s) Jan 12 18:12:59.288: INFO: PersistentVolume pvc-8ff4862e-dcc4-424d-8e21-093581c3543d found and phase=Bound (4m46.845513411s) Jan 12 18:13:04.320: INFO: PersistentVolume pvc-8ff4862e-dcc4-424d-8e21-093581c3543d found and phase=Bound (4m51.877352094s) Jan 12 18:13:09.352: INFO: PersistentVolume pvc-8ff4862e-dcc4-424d-8e21-093581c3543d found and phase=Bound (4m56.90996678s) Jan 12 18:13:14.387: FAIL: no dangling PVCs Expected <[]v1.PersistentVolumeClaim | len:2, cap:2>: - metadata: annotations: pv.kubernetes.io/bind-completed: "yes" pv.kubernetes.io/bound-by-controller: "yes" ... skipping 203 lines ... Jan 12 18:13:14.483: INFO: At 2023-01-12 17:53:07 +0000 UTC - event for inline-volume-tester2-qj9zd: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Jan 12 18:13:14.483: INFO: At 2023-01-12 17:53:07 +0000 UTC - event for inline-volume-tester2-qj9zd: {kubelet i-06e12471aa18677f8} Created: Created container csi-volume-tester Jan 12 18:13:14.483: INFO: At 2023-01-12 17:53:07 +0000 UTC - event for inline-volume-tester2-qj9zd: {kubelet i-06e12471aa18677f8} Started: Started container csi-volume-tester Jan 12 18:13:14.483: INFO: At 2023-01-12 17:53:10 +0000 UTC - event for inline-volume-tester2-qj9zd: {kubelet i-06e12471aa18677f8} Killing: Stopping container csi-volume-tester Jan 12 18:13:14.483: INFO: At 2023-01-12 18:03:12 +0000 UTC - event for inline-volume-tester-jp6vn: {kubelet i-06e12471aa18677f8} Killing: Stopping container csi-volume-tester Jan 12 18:13:14.514: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:13:14.514: 
INFO: inline-volume-tester-jp6vn i-06e12471aa18677f8 Failed 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:52:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:43 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:43 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:52:53 +0000 UTC }] Jan 12 18:13:14.514: INFO: inline-volume-tester2-qj9zd i-06e12471aa18677f8 Failed 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:53:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:53:41 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:53:41 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:53:01 +0000 UTC }] Jan 12 18:13:14.515: INFO: Jan 12 18:13:14.615: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:13:14.646: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 54145 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:08:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:08:24 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 18:13:14.646: INFO: Logging kubelet events for node i-01daa1f0ea8dcef5d ... skipping 178 lines ... Jan 12 18:13:15.984: INFO: Container hostpath ready: false, restart count 11 Jan 12 18:13:15.984: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 18:13:15.984: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 18:13:16.310: INFO: Latency metrics for node i-06e12471aa18677f8 STEP: Waiting for namespaces [ephemeral-8778] to vanish 01/12/23 18:13:16.343 Jan 12 18:18:16.407: INFO: error deleting namespace ephemeral-8778: timed out waiting for the condition STEP: uninstalling csi csi-hostpath driver 01/12/23 18:18:16.407 Jan 12 18:18:16.407: INFO: deleting *v1.ServiceAccount: ephemeral-8778-3846/csi-attacher Jan 12 18:18:16.440: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-8778 Jan 12 18:18:16.473: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-8778 Jan 12 18:18:16.505: INFO: deleting *v1.Role: ephemeral-8778-3846/external-attacher-cfg-ephemeral-8778 Jan 12 18:18:16.538: INFO: deleting *v1.RoleBinding: ephemeral-8778-3846/csi-attacher-role-cfg ... skipping 54 lines ...
Jan 12 18:18:17.730: INFO: At 2023-01-12 17:52:52 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} Created: Created container csi-resizer Jan 12 18:18:17.730: INFO: At 2023-01-12 17:52:52 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} Started: Started container csi-resizer Jan 12 18:18:17.730: INFO: At 2023-01-12 17:52:52 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} Pulled: Container image "registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0" already present on machine Jan 12 18:18:17.730: INFO: At 2023-01-12 17:52:52 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} Created: Created container csi-snapshotter Jan 12 18:18:17.730: INFO: At 2023-01-12 17:52:52 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} Started: Started container csi-snapshotter Jan 12 18:18:17.730: INFO: At 2023-01-12 17:52:52 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} Created: Created container node-driver-registrar Jan 12 18:18:17.730: INFO: At 2023-01-12 17:53:04 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} Unhealthy: Liveness probe failed: Get "http://172.20.50.83:9898/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 12 18:18:17.730: INFO: At 2023-01-12 17:55:00 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-06e12471aa18677f8} BackOff: Back-off restarting failed container hostpath in pod csi-hostpathplugin-0_ephemeral-8778-3846(6b4c2e71-11e6-40bb-80e3-12839fdb9c9c) Jan 12 18:18:17.762: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:18:17.762: INFO: csi-hostpathplugin-0 i-06e12471aa18677f8 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:52:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:18:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:18:09 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:52:51 +0000 UTC }] Jan 12 18:18:17.762: INFO: Jan 12 18:18:18.073: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:18:18.104: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 55493 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:13:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 
2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 
registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} ... skipping 154 lines ... [DeferCleanup (Each)] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral dump namespaces | framework.go:196 STEP: dump namespace information after failure 01/12/23 18:18:25.718 STEP: Collecting events from namespace "ephemeral-8778". 01/12/23 18:18:25.718 STEP: Found 0 events.
01/12/23 18:18:25.749 Jan 12 18:18:25.780: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:18:25.780: INFO: inline-volume-tester-jp6vn i-06e12471aa18677f8 Failed 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:52:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:43 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 18:03:43 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:52:53 +0000 UTC }] Jan 12 18:18:25.780: INFO: inline-volume-tester2-qj9zd i-06e12471aa18677f8 Failed 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:53:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:53:41 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:53:41 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 17:53:01 +0000 UTC }] Jan 12 18:18:25.780: INFO: Jan 12 18:18:25.879: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:18:25.911: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 55493 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:13:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:13:30 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 18:18:25.911: INFO: Logging kubelet events for node i-01daa1f0ea8dcef5d ... skipping 326 lines ... capacity: storage: 1Mi phase: Bound to be empty[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1mtest/e2e/storage/testsuites/ephemeral.go:431[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [1032.975 seconds][0m [sig-apps] StatefulSet [38;5;243mtest/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [38;5;243mtest/e2e/apps/statefulset.go:103[0m [38;5;9m[1m[It] should adopt matching orphans and release non-matching pods[0m [38;5;243mtest/e2e/apps/statefulset.go:173[0m ... skipping 78 lines ... 
Jan 12 18:14:03.222: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 12 18:14:13.222: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 12 18:14:23.222: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 12 18:14:33.222: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 12 18:14:43.222: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 12 18:14:43.252: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 12 18:14:43.252: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801e128?, 0xc0046e09c0}, 0x1, 0x0, 0xc002f99400) test/e2e/framework/statefulset/wait.go:58 +0xf9 k8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x801e128, 0xc0046e09c0}, 0xc002f99400) test/e2e/framework/statefulset/wait.go:179 +0xab k8s.io/kubernetes/test/e2e/apps.glob..func10.2.4() test/e2e/apps/statefulset.go:187 +0x239 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Jan 12 18:14:43.284: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-2129 describe po ss-0' Jan 12 18:14:43.607: INFO: stderr: "" Jan 12 18:14:43.607: INFO: stdout: "Name: ss-0\nNamespace: statefulset-2129\nPriority: 0\nService Account: default\nNode: i-06a506de3e6c2b98a/172.20.33.153\nStart Time: Thu, 12 Jan 2023 18:04:47 +0000\nLabels: baz=blah\n controller-revision-hash=ss-b9bbc7d7b\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: <none>\nStatus: Pending\nIP: \nIPs: <none>\nControlled By: 
StatefulSet/ss\nContainers:\n webserver:\n Container ID: \n Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-4\n Image ID: \n Port: <none>\n Host Port: <none>\n State: Waiting\n Reason: ContainerCreating\n Ready: False\n Restart Count: 0\n Readiness: exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /data/ from datadir (rw)\n /home from home (rw)\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5bhfg (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n datadir:\n Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n ClaimName: datadir-ss-0\n ReadOnly: false\n home:\n Type: HostPath (bare host directory volume)\n Path: /tmp/home\n HostPathType: \n kube-api-access-5bhfg:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9m56s default-scheduler Successfully assigned statefulset-2129/ss-0 to i-06a506de3e6c2b98a\n Normal SuccessfulAttachVolume 9m54s attachdetach-controller AttachVolume.Attach succeeded for volume \"pvc-aabaef16-6c71-410a-ac92-20a70937eaf5\"\n Warning FailedMount 7m53s kubelet Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[kube-api-access-5bhfg datadir home]: timed out waiting for the condition\n Warning FailedMount 97s (x12 over 9m52s) kubelet MountVolume.MountDevice failed for volume \"pvc-aabaef16-6c71-410a-ac92-20a70937eaf5\" : rpc error: code = Unavailable desc = connection error: desc = \"transport: 
Error while dialing dial unix /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock: connect: connection refused\"\n Warning FailedMount 66s (x3 over 5m35s) kubelet Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[datadir home kube-api-access-5bhfg]: timed out waiting for the condition\n" Jan 12 18:14:43.608: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-2129 Priority: 0 Service Account: default ... skipping 53 lines ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m56s default-scheduler Successfully assigned statefulset-2129/ss-0 to i-06a506de3e6c2b98a Normal SuccessfulAttachVolume 9m54s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-aabaef16-6c71-410a-ac92-20a70937eaf5" Warning FailedMount 7m53s kubelet Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[kube-api-access-5bhfg datadir home]: timed out waiting for the condition Warning FailedMount 97s (x12 over 9m52s) kubelet MountVolume.MountDevice failed for volume "pvc-aabaef16-6c71-410a-ac92-20a70937eaf5" : rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock: connect: connection refused" Warning FailedMount 66s (x3 over 5m35s) kubelet Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[datadir home kube-api-access-5bhfg]: timed out waiting for the condition Jan 12 18:14:43.608: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-2129 logs ss-0 --tail=100' Jan 12 18:14:43.859: INFO: rc: 1 Jan 12 18:14:43.859: INFO: Last 100 log lines of ss-0: ... skipping 22 lines ... 
Jan 12 18:21:54.269: INFO: At 2023-01-12 18:04:43 +0000 UTC - event for datadir-ss-0: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Jan 12 18:21:54.269: INFO: At 2023-01-12 18:04:43 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success Jan 12 18:21:54.269: INFO: At 2023-01-12 18:04:43 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Jan 12 18:21:54.269: INFO: At 2023-01-12 18:04:46 +0000 UTC - event for datadir-ss-0: {ebs.csi.aws.com_i-064d67fb1979934c5.ec2.internal_08c3cee3-26fd-4e93-9225-337adb67e72e } ProvisioningSucceeded: Successfully provisioned volume pvc-aabaef16-6c71-410a-ac92-20a70937eaf5 Jan 12 18:21:54.269: INFO: At 2023-01-12 18:04:47 +0000 UTC - event for ss-0: {default-scheduler } Scheduled: Successfully assigned statefulset-2129/ss-0 to i-06a506de3e6c2b98a Jan 12 18:21:54.269: INFO: At 2023-01-12 18:04:49 +0000 UTC - event for ss-0: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-aabaef16-6c71-410a-ac92-20a70937eaf5" Jan 12 18:21:54.269: INFO: At 2023-01-12 18:04:51 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} FailedMount: MountVolume.MountDevice failed for volume "pvc-aabaef16-6c71-410a-ac92-20a70937eaf5" : rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock: connect: connection refused" Jan 12 18:21:54.269: INFO: At 2023-01-12 18:06:50 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} FailedMount: Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[kube-api-access-5bhfg datadir home]: timed out waiting for the condition Jan 12 18:21:54.269: INFO: At 2023-01-12 18:09:08 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} FailedMount: Unable 
to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[datadir home kube-api-access-5bhfg]: timed out waiting for the condition Jan 12 18:21:54.269: INFO: At 2023-01-12 18:14:43 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Jan 12 18:21:54.269: INFO: At 2023-01-12 18:15:09 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} Started: Started container webserver Jan 12 18:21:54.269: INFO: At 2023-01-12 18:15:09 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} Created: Created container webserver Jan 12 18:21:54.269: INFO: At 2023-01-12 18:15:09 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Jan 12 18:21:54.269: INFO: At 2023-01-12 18:15:10 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} Killing: Stopping container webserver Jan 12 18:21:54.269: INFO: At 2023-01-12 18:15:10 +0000 UTC - event for ss-0: {kubelet i-06a506de3e6c2b98a} Unhealthy: Readiness probe failed: Jan 12 18:21:54.299: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 18:21:54.299: INFO: Jan 12 18:21:54.331: INFO: Logging node info for node i-01daa1f0ea8dcef5d Jan 12 18:21:54.362: INFO: Node Info: &Node{ObjectMeta:{i-01daa1f0ea8dcef5d faddcd1a-1b1c-4996-a8c4-11530fac8916 56885 0 2023-01-12 17:19:54 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-01daa1f0ea8dcef5d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-01daa1f0ea8dcef5d topology.kubernetes.io/region:us-east-1 
topology.kubernetes.io/zone:us-east-1a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.141 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01daa1f0ea8dcef5d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-12 17:19:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-12 17:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 18:18:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-01daa1f0ea8dcef5d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{51527004160 0} {<nil>} 50319340Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4050644992 0} {<nil>} 3955708Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{46374303668 0} {<nil>} 46374303668 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3945787392 0} {<nil>} 3853308Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 18:18:35 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 18:18:35 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 18:18:35 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 18:18:35 +0000 UTC,LastTransitionTime:2023-01-12 17:21:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.141,},NodeAddress{Type:ExternalIP,Address:107.20.47.139,},NodeAddress{Type:InternalDNS,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:Hostname,Address:i-01daa1f0ea8dcef5d.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-107-20-47-139.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ea01165b16f24244c2446f965216,SystemUUID:ec28ea01-165b-16f2-4244-c2446f965216,BootID:476f07a5-2a72-457b-b832-d96f60ccaf7d,KernelVersion:5.10.157-139.675.amzn2.x86_64,OSImage:Amazon Linux 2,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 18:21:54.362: INFO: ... skipping 147 lines ... Latency metrics for node i-06e12471aa18677f8 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 [1mSTEP:[0m Destroying namespace "statefulset-2129" for this suite. [38;5;243m01/12/23 18:21:55.783[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 12 18:14:43.252: Failed waiting for pods to enter running: timed out waiting for the condition[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1mtest/e2e/framework/statefulset/wait.go:58[0m [38;5;243m------------------------------[0m [1m[38;5;9mGinkgo timed out waiting for all parallel procs to report back[0m [38;5;243mTest suite:[0m e2e (./_rundir/554861e1-929c-11ed-901d-e2a8de243d6a) ... skipping 11 lines ... 
Jan 12 17:23:34.600: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]} I0112 17:23:34.602092 6626 e2e.go:126] Starting e2e run "f94d83e8-a043-4623-9f5c-e22eda61200b" on Ginkgo node 1 [1mOutput from proc 2:[0m Jan 12 17:23:34.800: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml 
FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:34.801037 6627 e2e.go:126] Starting e2e run "8d0b32f3-f796-4e63-9836-499fa204d7f4" on Ginkgo node 2
--- FAIL: TestE2E (2669.21s)
FAIL
Output from proc 3:
Jan 12 17:23:34.823: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:34.824549 6629 e2e.go:126] Starting e2e run "c1e8d852-0b88-4eab-b9bb-70ee33962dc0" on Ginkgo node 3
--- FAIL: TestE2E (3018.92s)
FAIL
Output from proc 4:
Jan 12 17:23:34.906: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:34.911509 6631 e2e.go:126] Starting e2e run "8428eb3c-6f7c-4410-9808-806d351ea58d" on Ginkgo node 4
--- FAIL: TestE2E (3084.58s)
FAIL
Output from proc 5:
Jan 12 17:23:34.888: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:34.888470 6633 e2e.go:126] Starting e2e run "6ee21f16-c091-4dca-8d0c-d0d4e3abdf0d" on Ginkgo node 5
--- FAIL: TestE2E (2916.83s)
FAIL
Output from proc 6:
Jan 12 17:23:34.768: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:34.779099 6635 e2e.go:126] Starting e2e run "a138a429-50d9-4f42-b4d6-a26b6ae49ec6" on Ginkgo node 6
--- FAIL: TestE2E (2699.13s)
FAIL
Output from proc 7:
Jan 12 17:23:35.125: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.126401 6641 e2e.go:126] Starting e2e run "86267a5f-d82c-4515-a11b-4c409aab4b52" on Ginkgo node 7
--- FAIL: TestE2E (2826.88s)
FAIL
Output from proc 8:
Jan 12 17:23:34.882: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:34.885708 6667 e2e.go:126] Starting e2e run "dd0fe782-9a62-4514-b50c-79013da9d67d" on Ginkgo node 8
--- FAIL: TestE2E (2821.85s)
FAIL
Output from proc 9:
Jan 12 17:23:35.168: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.168571 6682 e2e.go:126] Starting e2e run "b7cb5afd-36b7-4b37-91e4-837b1d8dcc42" on Ginkgo node 9
--- FAIL: TestE2E (3500.68s)
FAIL
Output from proc 10:
Jan 12 17:23:34.880: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:34.881089 6686 e2e.go:126] Starting e2e run "3776a018-320a-4577-8d3e-310f835938e9" on Ginkgo node 10
--- FAIL: TestE2E (2674.39s)
FAIL
Output from proc 11:
Jan 12 17:23:35.125: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.161312 6688 e2e.go:126] Starting e2e run "05b3e058-fbef-4bcf-9fa7-0c06b4f7918c" on Ginkgo node 11
--- FAIL: TestE2E (3014.30s)
FAIL
Output from proc 12:
Jan 12 17:23:35.204: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.205000 6694 e2e.go:126] Starting e2e run "872a8388-f212-4c15-aef4-cc0f97b37f65" on Ginkgo node 12
--- FAIL: TestE2E (2690.31s)
FAIL
Output from proc 13:
Jan 12 17:23:35.194: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.197553 6702 e2e.go:126] Starting e2e run "140a5a01-7369-4901-9d15-f8da8bceec37" on Ginkgo node 13
--- FAIL: TestE2E (2725.87s)
FAIL
Output from proc 14:
Jan 12 17:23:35.481: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.483854 6711 e2e.go:126] Starting e2e run "ed35c548-117b-419a-baf0-f532e56954ef" on Ginkgo node 14
Output from proc 15:
Jan 12 17:23:35.065: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.066339 6724 e2e.go:126] Starting e2e run "8c4bf213-723d-4129-a8ad-66c50418d9b0" on Ginkgo node 15
--- FAIL: TestE2E (2691.43s)
FAIL
Output from proc 16:
Jan 12 17:23:34.981: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:34.983051 6729 e2e.go:126] Starting e2e run "0a87b185-eee9-4126-8ba9-abd542a364a2" on Ginkgo node 16
--- FAIL: TestE2E (3165.41s)
FAIL
Output from proc 17:
Jan 12 17:23:35.060: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.061345 6742 e2e.go:126] Starting e2e run "e19c79d9-6bae-4912-bc40-89e30dc288a1" on Ginkgo node 17
--- FAIL: TestE2E (2682.61s)
FAIL
Output from proc 18:
Jan 12 17:23:35.213: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.214409 6752 e2e.go:126] Starting e2e run "6c0c115c-943b-4ce5-a3e0-05aada0a7816" on Ginkgo node 18
--- FAIL: TestE2E (2695.83s)
FAIL
Output from proc 19:
Jan 12 17:23:35.677: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.677918 6776 e2e.go:126] Starting e2e run "b1d3310f-8a18-46fb-b7e6-956e514d36b7" on Ginkgo node 19
--- FAIL: TestE2E (2693.40s)
FAIL
Output from proc 20:
Jan 12 17:23:35.621: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.622374 6781 e2e.go:126] Starting e2e run "3cad1d8d-a38a-4af6-8b7a-3f2e770075de" on Ginkgo node 20
--- FAIL: TestE2E (3165.25s)
FAIL
Output from proc 21:
Jan 12 17:23:35.524: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.528510 6791 e2e.go:126] Starting e2e run "90f89579-5057-410a-95e4-41ff68241bee" on Ginkgo node 21
--- FAIL: TestE2E (2679.36s)
FAIL
Output from proc 22:
Jan 12 17:23:35.841: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.842217 6798 e2e.go:126] Starting e2e run "daf6c5a5-b4a5-4c24-a95b-5e0ef67c7def" on Ginkgo node 22
--- FAIL: TestE2E (2832.68s)
FAIL
Output from proc 23:
Jan 12 17:23:35.741: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.742458 6811 e2e.go:126] Starting e2e run "6226651a-a936-42f1-b909-da3d8e638d3d" on Ginkgo node 23
--- FAIL: TestE2E (3291.66s)
FAIL
Output from proc 24:
Jan 12 17:23:35.429: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.430235 6823 e2e.go:126] Starting e2e run "c0a0c216-730c-4e25-be38-4980a1f69b46" on Ginkgo node 24
--- FAIL: TestE2E (2673.51s)
FAIL
Output from proc 25:
Jan 12 17:23:35.483: INFO: Driver loaded from path [/home/prow/go/src/k8s.io/kops/tests/e2e/csi-manifests/aws-ebs/driver.yaml]: &{DriverInfo:{Name:ebs.csi.aws.com InTreePluginName: FeatureTag: MaxFileSize:0 SupportedSizeRange:{Max:16Ti Min:1Gi} SupportedFsType:map[:{} ext4:{} xfs:{}] SupportedMountOption:map[dirsync:{}] RequiredMountOption:map[] Capabilities:map[block:true controllerExpansion:true exec:true fsGroup:true multipods:true nodeExpansion:true offlineExpansion:true onlineExpansion:true persistence:true pvcDataSource:false snapshotDataSource:true topology:true volumeLimits:true] RequiredAccessModes:[] TopologyKeys:[topology.ebs.csi.aws.com/zone] NumAllowedTopologies:0 StressTestOptions:<nil> VolumeSnapshotStressTestOptions:<nil> PerformanceTestOptions:<nil>} StorageClass:{FromName:false FromFile:tests/e2e/csi-manifests/aws-ebs/sc.yaml FromExistingClassName:} SnapshotClass:{FromName:false FromFile: FromExistingClassName:} InlineVolumes:[] ClientNodeName: Timeouts:map[]}
I0112 17:23:35.492478 6825 e2e.go:126] Starting e2e run "50288132-5b83-4ab9-86f0-70a3c1d081e4" on Ginkgo node 25
--- FAIL: TestE2E (2773.77s)
FAIL
** End **
Ginkgo ran 1 suite in 1h0m2.183267326s
Test Suite Failed
You're using deprecated Ginkgo functionality:
=============================================
--debug is deprecated
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed--debug
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.4.0
F0112 18:23:36.521959 5612 tester.go:477] failed to run ginkgo tester: exit status 1
I0112 18:23:36.527869 5522 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops toolbox dump --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user
I0112 18:23:36.527961 5522 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops toolbox dump --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ec2-user
I0112 18:23:36.603492 14092 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
2023/01/12 18:23:55 Dumping node i-01daa1f0ea8dcef5d
2023/01/12 18:24:04 error dumping node i-01daa1f0ea8dcef5d: error executing command "sysctl -a": Process exited with status 127
2023/01/12 18:24:04 Dumping node i-03f9dde5751a3fd38
2023/01/12 18:24:11 error dumping node i-03f9dde5751a3fd38: error executing command "sysctl -a": Process exited with status 127
2023/01/12 18:24:11 Dumping node i-064d67fb1979934c5
2023/01/12 18:24:14 error dumping node i-064d67fb1979934c5: error executing command "sysctl -a": Process exited with status 127
2023/01/12 18:24:14 Dumping node i-06a506de3e6c2b98a
2023/01/12 18:24:20 error dumping node i-06a506de3e6c2b98a: error executing command "sysctl -a": Process exited with status 127
2023/01/12 18:24:20 Dumping node i-06e12471aa18677f8
2023/01/12 18:24:31 error dumping node i-06e12471aa18677f8: error executing command "sysctl -a": Process exited with status 127
I0112 18:24:31.608986 5522 dumplogs.go:79] /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops get cluster --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io -o yaml
I0112 18:24:31.609062 5522 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops get cluster --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io -o yaml
I0112 18:24:32.176325 5522 dumplogs.go:79] /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops get instancegroups --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io -o yaml
I0112 18:24:32.176386 5522 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/554861e1-929c-11ed-901d-e2a8de243d6a/kops get instancegroups --name e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io -o yaml
I0112 18:24:33.051838 5522 dumplogs.go:98] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I0112 18:24:33.051896 5522 local.go:42] ⚙️ kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
... skipping 386 lines ...
route-table:rtb-0cec3fcfa0522e092 ok
vpc:vpc-03a21af6319fddbd7 ok
dhcp-options:dopt-029e0dbabed4a8d69 ok
Deleted kubectl config for e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io
Deleted cluster: "e2e-e2e-kops-grid-cilium-eni-amzn2-k26.test-cncf-aws.k8s.io"
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace