Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 36m27s
Revision | master
... skipping 157 lines ...
I0416 04:11:44.135289 5599 dumplogs.go:78] /logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kops get cluster --name e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io -o yaml
I0416 04:11:44.756084 5599 dumplogs.go:78] /logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kops get instancegroups --name e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io -o yaml
I0416 04:11:45.693079 5599 dumplogs.go:97] kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I0416 04:11:45.747032 5599 dumplogs.go:188] /logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kops toolbox dump --name e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user -o yaml
W0416 04:11:52.540858 5599 dumplogs.go:270] ControlPlane instance not found from kops toolbox dump
I0416 04:11:52.541047 5599 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W0416 04:11:52.608510 5599 dumplogs.go:132] Failed to get csinodes: exit status 1
I0416 04:11:52.608626 5599 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W0416 04:11:52.666032 5599 dumplogs.go:132] Failed to get csidrivers: exit status 1
I0416 04:11:52.666146 5599 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W0416 04:11:52.727505 5599 dumplogs.go:132] Failed to get storageclasses: exit status 1
I0416 04:11:52.727621 5599 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W0416 04:11:52.789028 5599 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I0416 04:11:52.789198 5599 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W0416 04:11:52.841459 5599 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I0416 04:11:52.841598 5599 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W0416 04:11:52.893598 5599 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
W0416 04:11:52.946348 5599 down.go:34] Dumping cluster logs at the start of Down() failed: failed to get namespaces: exit status 1
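The dumplogs.go:126/132 pairs above show a best-effort dump: each resource kind is fetched with a short timeout, and a failure is logged as a warning rather than aborting teardown, which is why every kind fails individually before the single down.go:34 warning. A minimal sketch of that pattern, using only the kubectl invocation visible in the log (function and output handling are illustrative, not the kubetest2 source):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Kinds taken verbatim from the dump sequence in the log above.
	kinds := []string{
		"csinodes", "csidrivers", "storageclasses", "persistentvolumes",
		"mutatingwebhookconfigurations", "validatingwebhookconfigurations",
	}
	for _, kind := range kinds {
		out, err := exec.Command("kubectl", "--request-timeout", "5s",
			"get", kind, "--all-namespaces", "-o", "yaml").Output()
		if err != nil {
			// Mirrors the W-level "Failed to get <kind>: exit status 1" lines:
			// the API server is already unreachable, so keep going.
			log.Printf("Failed to get %s: %v", kind, err)
			continue
		}
		// The real tool writes this under the artifacts directory.
		log.Printf("dumped %s (%d bytes)", kind, len(out))
	}
}
```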
I0416 04:11:52.946397 5599 down.go:48] /logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kops delete cluster --name e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --yes
I0416 04:11:52.963700 5737 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0416 04:11:52.963802 5737 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
TYPE            NAME                                                                                                                                            ID
keypair         kubernetes.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io-a4:44:93:63:d1:46:cf:75:92:ff:3a:37:96:28:f8:c9            key-08c76896de898a5b0
route53-record  api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io.                                                                  ZEMLNXIIWQ0RV/A/api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io.
... skipping 2 lines ...
route53-record:ZEMLNXIIWQ0RV/A/api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io.  ok
keypair:key-08c76896de898a5b0  ok
Deleted cluster: "e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io"
I0416 04:12:05.943864 5599 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/04/16 04:12:05 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0416 04:12:05.961313 5599 http.go:37] curl https://ip.jsb.workers.dev
I0416 04:12:06.055050 5599 up.go:156] /logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kops create cluster --name e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.8 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --image=amazon/amzn2-ami-kernel-5.10-hvm-2.0.20220316.0-x86_64-gp2 --channel=alpha --networking=flannel --container-runtime=containerd --admin-access 104.197.232.96/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-south-1a --master-size c5.large
I0416 04:12:06.074225 5748 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0416 04:12:06.074345 5748 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I0416 04:12:06.099046 5748 create_cluster.go:831] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0416 04:12:06.693862 5748 new_cluster.go:1072] Cloud Provider ID = aws
... skipping 521 lines ...
I0416 04:12:35.477991 5599 up.go:240] /logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kops validate cluster --name e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0416 04:12:35.499688 5786 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0416 04:12:35.499827 5786 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io

W0416 04:12:37.259420 5786 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-ap-south-1a  Master  c5.large     1    1    ap-south-1a
nodes-ap-south-1a   Node    t3.medium    4    4    ap-south-1a

NODE STATUS
NAME  ROLE  READY

VALIDATION ERRORS
KIND  NAME       MESSAGE
dns   apiserver  Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0416 04:12:47.297566 5786 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 20 near-identical validation retries (W0416 04:12:57 through W0416 04:16:08): each printed the same INSTANCE GROUPS table and the identical dns/apiserver placeholder error before "(will retry): cluster not yet healthy" ...
W0416 04:16:48.162267 5786 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 203.0.113.123:443: i/o timeout
INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-ap-south-1a  Master  c5.large     1    1    ap-south-1a
nodes-ap-south-1a   Node    t3.medium    4    4    ap-south-1a

NODE STATUS
... skipping 7 lines ...
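The validation loop above traces the kops DNS bootstrap: the api record first does not exist ("no such host"), then resolves to the 203.0.113.123 placeholder kops pre-creates, and the client times out dialing it until dns-controller rewrites the record. A minimal standalone sketch of that progression using only the Go standard library (the polling code is illustrative, not kops's validator):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiHost = "api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io"
	const placeholder = "203.0.113.123" // TEST-NET-3 address kops uses as a stand-in

	deadline := time.Now().Add(15 * time.Minute) // mirrors --wait 15m0s
	for time.Now().Before(deadline) {
		addrs, err := net.LookupHost(apiHost)
		switch {
		case err != nil:
			// The "no such host" phase: the record has not been created yet.
			fmt.Println("record not resolvable yet:", err)
		case len(addrs) > 0 && addrs[0] == placeholder:
			// The "dial tcp 203.0.113.123:443: i/o timeout" phase.
			fmt.Println("record still points at the placeholder")
		default:
			fmt.Println("API DNS updated:", addrs)
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for dns-controller to update the record")
}
```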
VALIDATION ERRORS
KIND  NAME                            MESSAGE
Pod   kube-system/ebs-csi-node-cc99b  system-node-critical pod "ebs-csi-node-cc99b" is pending
Pod   kube-system/ebs-csi-node-vr5nm  system-node-critical pod "ebs-csi-node-vr5nm" is pending
Pod   kube-system/ebs-csi-node-wxgk2  system-node-critical pod "ebs-csi-node-wxgk2" is pending

Validation Failed
W0416 04:17:03.442373 5786 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-ap-south-1a  Master  c5.large     1    1    ap-south-1a
nodes-ap-south-1a   Node    t3.medium    4    4    ap-south-1a
... skipping 7 lines ...
VALIDATION ERRORS
KIND  NAME                            MESSAGE
Pod   kube-system/ebs-csi-node-cc99b  system-node-critical pod "ebs-csi-node-cc99b" is pending
Pod   kube-system/ebs-csi-node-wxgk2  system-node-critical pod "ebs-csi-node-wxgk2" is pending

Validation Failed
W0416 04:17:17.042331 5786 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-ap-south-1a  Master  c5.large     1    1    ap-south-1a
nodes-ap-south-1a   Node    t3.medium    4    4    ap-south-1a
... skipping 7 lines ...
VALIDATION ERRORS
KIND  NAME                                                                  MESSAGE
Pod   kube-system/kube-proxy-ip-172-20-50-117.ap-south-1.compute.internal  system-node-critical pod "kube-proxy-ip-172-20-50-117.ap-south-1.compute.internal" is pending
Pod   kube-system/kube-proxy-ip-172-20-63-100.ap-south-1.compute.internal  system-node-critical pod "kube-proxy-ip-172-20-63-100.ap-south-1.compute.internal" is pending

Validation Failed
W0416 04:17:30.581842 5786 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-ap-south-1a  Master  c5.large     1    1    ap-south-1a
nodes-ap-south-1a   Node    t3.medium    4    4    ap-south-1a
... skipping 533 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 257 lines ...
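The "-- skipping" blocks here and below come from guards run in each spec's BeforeEach: the framework's skipper aborts the spec before setup when the environment does not match. A hedged fragment-style sketch of the pattern as it appears inside the kubernetes test tree (import paths follow the 1.22-era layout and may differ by release):

```go
package e2e

import (
	"github.com/onsi/ginkgo"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.Describe("[sig-node] example gated spec", func() {
	ginkgo.BeforeEach(func() {
		// On AWS this prints "Only supported for providers [gce gke] (not aws)"
		// and records the spec as S [SKIPPING] in Spec Setup (BeforeEach).
		e2eskipper.SkipUnlessProviderIs("gce", "gke")
		// Produces messages of the form "Requires at least 2 nodes (not 0)".
		e2eskipper.SkipUnlessNodeCountIsAtLeast(2)
	})
	ginkgo.It("should run without error", func() {})
})
```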
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 9 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:20:22.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:23.059: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 55 lines ...
STEP: Destroying namespace "node-problem-detector-5687" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.396 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 17 lines ...
STEP: Destroying namespace "pod-disks-8499" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.393 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 39 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:20:24.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-3543" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":0,"failed":0}
SSS
------------------------------
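The NodeLease spec that just passed asserts the kubelet keeps a coordination.k8s.io Lease fresh in the kube-node-lease namespace. A hedged client-go sketch of the same check (clientset construction assumed; names illustrative):

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// NodeLeaseRenewTime reads the per-node Lease the kubelet maintains;
// the e2e test asserts Spec.RenewTime keeps advancing.
func NodeLeaseRenewTime(cs kubernetes.Interface, nodeName string) error {
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("lease %s renewed at %v\n", lease.Name, lease.Spec.RenewTime)
	return nil
}
```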
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:5.307 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ignore conflict errors if force apply is used
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:482
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":1,"skipped":11,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:7.398 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support creating EndpointSlice API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:7.584 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: no PDB => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":1,"skipped":2,"failed":0}
SSS
------------------------------
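The ServerSideApply spec above ("should ignore conflict errors if force apply is used") exercises the fact that server-side apply returns a 409 Conflict when a second field manager touches owned fields, unless Force is set. A hedged client-go sketch of that call (object and clientset are illustrative):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// ForceApply takes ownership of conflicting fields instead of failing with
// a conflict error, which is the behavior the conformance test asserts.
func ForceApply(cs kubernetes.Interface) error {
	force := true // without this, a second field manager gets a 409 Conflict
	_, err := cs.CoreV1().ConfigMaps("default").Patch(
		context.TODO(), "ssa-example", types.ApplyPatchType,
		[]byte(`{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"ssa-example"},"data":{"owner":"e2e-sketch"}}`),
		metav1.PatchOptions{FieldManager: "e2e-sketch", Force: &force},
	)
	return err
}
```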
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 5 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 04:20:24.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dcbe21c-4e2e-4199-8274-65c3e7ded587" in namespace "downward-api-4301" to be "Succeeded or Failed"
Apr 16 04:20:24.471: INFO: Pod "downwardapi-volume-2dcbe21c-4e2e-4199-8274-65c3e7ded587": Phase="Pending", Reason="", readiness=false. Elapsed: 235.131006ms
Apr 16 04:20:26.712: INFO: Pod "downwardapi-volume-2dcbe21c-4e2e-4199-8274-65c3e7ded587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475644332s
Apr 16 04:20:28.949: INFO: Pod "downwardapi-volume-2dcbe21c-4e2e-4199-8274-65c3e7ded587": Phase="Pending", Reason="", readiness=false. Elapsed: 4.712686126s
Apr 16 04:20:31.184: INFO: Pod "downwardapi-volume-2dcbe21c-4e2e-4199-8274-65c3e7ded587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.948501893s
STEP: Saw pod success
Apr 16 04:20:31.185: INFO: Pod "downwardapi-volume-2dcbe21c-4e2e-4199-8274-65c3e7ded587" satisfied condition "Succeeded or Failed"
Apr 16 04:20:31.421: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod downwardapi-volume-2dcbe21c-4e2e-4199-8274-65c3e7ded587 container client-container: <nil>
STEP: delete the pod
Apr 16 04:20:31.907: INFO: Waiting for pod downwardapi-volume-2dcbe21c-4e2e-4199-8274-65c3e7ded587 to disappear
Apr 16 04:20:32.146: INFO: Pod downwardapi-volume-2dcbe21c-4e2e-4199-8274-65c3e7ded587 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.804 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
SSSS
------------------------------
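The repeating 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines throughout this run are a poll over the pod's phase at a fixed interval. A hedged equivalent with client-go's wait helper (function name and interval are illustrative, not the framework source):

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodCompletion polls the pod until it reaches a terminal phase,
// mirroring the "Succeeded or Failed" condition in the log lines above.
func WaitForPodCompletion(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Prints the same Phase/Elapsed-style progress seen in the log.
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
	})
}
```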
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:32.886: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 30 lines ...
Apr 16 04:20:23.378: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-0f8cea55-7740-49c8-9423-0d5f11a58362
STEP: Creating a pod to test consume configMaps
Apr 16 04:20:24.320: INFO: Waiting up to 5m0s for pod "pod-configmaps-32b97766-45d5-4742-9a9e-8dbaf5f22af9" in namespace "configmap-5959" to be "Succeeded or Failed"
Apr 16 04:20:24.555: INFO: Pod "pod-configmaps-32b97766-45d5-4742-9a9e-8dbaf5f22af9": Phase="Pending", Reason="", readiness=false. Elapsed: 235.032894ms
Apr 16 04:20:26.792: INFO: Pod "pod-configmaps-32b97766-45d5-4742-9a9e-8dbaf5f22af9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471620835s
Apr 16 04:20:29.031: INFO: Pod "pod-configmaps-32b97766-45d5-4742-9a9e-8dbaf5f22af9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.710351494s
Apr 16 04:20:31.267: INFO: Pod "pod-configmaps-32b97766-45d5-4742-9a9e-8dbaf5f22af9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.946670703s
STEP: Saw pod success
Apr 16 04:20:31.267: INFO: Pod "pod-configmaps-32b97766-45d5-4742-9a9e-8dbaf5f22af9" satisfied condition "Succeeded or Failed"
Apr 16 04:20:31.502: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-configmaps-32b97766-45d5-4742-9a9e-8dbaf5f22af9 container agnhost-container: <nil>
STEP: delete the pod
Apr 16 04:20:31.992: INFO: Waiting for pod pod-configmaps-32b97766-45d5-4742-9a9e-8dbaf5f22af9 to disappear
Apr 16 04:20:32.227: INFO: Pod pod-configmaps-32b97766-45d5-4742-9a9e-8dbaf5f22af9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.889 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:32.956: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
Apr 16 04:20:24.129: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-ef1fedb7-c690-4cdf-a647-8dbc189e853f
STEP: Creating a pod to test consume secrets
Apr 16 04:20:25.092: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d" in namespace "projected-248" to be "Succeeded or Failed"
Apr 16 04:20:25.327: INFO: Pod "pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d": Phase="Pending", Reason="", readiness=false. Elapsed: 234.765603ms
Apr 16 04:20:27.563: INFO: Pod "pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470501263s
Apr 16 04:20:29.800: INFO: Pod "pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707663208s
Apr 16 04:20:32.036: INFO: Pod "pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.943532001s
Apr 16 04:20:34.273: INFO: Pod "pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.180232072s
STEP: Saw pod success
Apr 16 04:20:34.273: INFO: Pod "pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d" satisfied condition "Succeeded or Failed"
Apr 16 04:20:34.508: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 16 04:20:34.993: INFO: Waiting for pod pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d to disappear
Apr 16 04:20:35.228: INFO: Pod pod-projected-secrets-3d1b85b3-a540-4c88-95e2-ffde0aec063d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.862 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
SS
------------------------------
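The ConfigMap "with mappings" variant above exercises the volume's Items field, which remaps a ConfigMap key to an arbitrary path under the mount point. A hedged sketch of such a pod spec (the image tag and agnhost command are assumptions; the test's real pod differs in detail):

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MappedConfigMapPod mounts one key of a ConfigMap at a remapped path;
// the container then reads the file back to verify the mapping.
func MappedConfigMapPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "agnhost-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Command: []string{"/agnhost", "mounttest", "--file_content=/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "configmap-volume",
				VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Key "data-2" appears at path/to/data-2 instead of its key name.
						Items: []v1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
		},
	}
}
```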
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
• [SLOW TEST:15.999 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1007
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":1,"skipped":0,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:15.471 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:530
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":2,"skipped":23,"failed":0}
SSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 344 lines ...
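The ResourceQuota spec above counts PVCs and requested storage scoped to a StorageClass. A hedged sketch of such a quota object; the per-class resource names follow the documented `<class>.storageclass.storage.k8s.io/*` pattern, and the class name "gold" is an assumption:

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// StorageClassQuota caps PVC count and requested storage, both overall
// and for claims that reference the "gold" StorageClass specifically.
func StorageClassQuota() *v1.ResourceQuota {
	return &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-by-storage-class"},
		Spec: v1.ResourceQuotaSpec{
			Hard: v1.ResourceList{
				v1.ResourcePersistentVolumeClaims:                          resource.MustParse("2"),
				v1.ResourceRequestsStorage:                                 resource.MustParse("2Gi"),
				"gold.storageclass.storage.k8s.io/persistentvolumeclaims": resource.MustParse("1"),
				"gold.storageclass.storage.k8s.io/requests.storage":       resource.MustParse("1Gi"),
			},
		},
	}
}
```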
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:40.428: INFO: Only supported for providers [openstack] (not aws)
... skipping 154 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
  that expects a client request
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
    should support a client that connects, sends DATA, and disconnects
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:45.186: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
Apr 16 04:20:29.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token:
Apr 16 04:20:30.904: INFO: Waiting up to 5m0s for pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7" in namespace "svcaccounts-3997" to be "Succeeded or Failed"
Apr 16 04:20:31.139: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 235.097938ms
Apr 16 04:20:33.374: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.470360726s
STEP: Saw pod success
Apr 16 04:20:33.375: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7" satisfied condition "Succeeded or Failed"
Apr 16 04:20:33.609: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 container agnhost-container: <nil>
STEP: delete the pod
Apr 16 04:20:34.087: INFO: Waiting for pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 to disappear
Apr 16 04:20:34.323: INFO: Pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 no longer exists
STEP: Creating a pod to test service account token:
Apr 16 04:20:34.580: INFO: Waiting up to 5m0s for pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7" in namespace "svcaccounts-3997" to be "Succeeded or Failed"
Apr 16 04:20:34.815: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 234.869922ms
Apr 16 04:20:37.052: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47207558s
Apr 16 04:20:39.288: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.707912904s
STEP: Saw pod success
Apr 16 04:20:39.288: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7" satisfied condition "Succeeded or Failed"
Apr 16 04:20:39.523: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 container agnhost-container: <nil>
STEP: delete the pod
Apr 16 04:20:40.001: INFO: Waiting for pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 to disappear
Apr 16 04:20:40.236: INFO: Pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 no longer exists
STEP: Creating a pod to test service account token:
Apr 16 04:20:40.471: INFO: Waiting up to 5m0s for pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7" in namespace "svcaccounts-3997" to be "Succeeded or Failed"
Apr 16 04:20:40.706: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 234.850466ms
Apr 16 04:20:42.943: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.471375634s
STEP: Saw pod success
Apr 16 04:20:42.943: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7" satisfied condition "Succeeded or Failed"
Apr 16 04:20:43.179: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 container agnhost-container: <nil>
STEP: delete the pod
Apr 16 04:20:43.659: INFO: Waiting for pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 to disappear
Apr 16 04:20:43.893: INFO: Pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 no longer exists
STEP: Creating a pod to test service account token:
Apr 16 04:20:44.131: INFO: Waiting up to 5m0s for pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7" in namespace "svcaccounts-3997" to be "Succeeded or Failed"
Apr 16 04:20:44.367: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 235.675793ms
Apr 16 04:20:46.603: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.471726216s
STEP: Saw pod success
Apr 16 04:20:46.603: INFO: Pod "test-pod-231a9351-d927-4ff0-a338-85c58238c8c7" satisfied condition "Succeeded or Failed"
Apr 16 04:20:46.838: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 container agnhost-container: <nil>
STEP: delete the pod
Apr 16 04:20:47.313: INFO: Waiting for pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 to disappear
Apr 16 04:20:47.548: INFO: Pod test-pod-231a9351-d927-4ff0-a338-85c58238c8c7 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:18.530 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":9,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:48.045: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 74 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-hpf2
STEP: Creating a pod to test atomic-volume-subpath
Apr 16 04:20:24.382: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hpf2" in namespace "subpath-8452" to be "Succeeded or Failed"
Apr 16 04:20:24.621: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Pending", Reason="", readiness=false. Elapsed: 239.145098ms
Apr 16 04:20:26.860: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47809734s
Apr 16 04:20:29.100: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.717919005s
Apr 16 04:20:31.340: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.9582023s
Apr 16 04:20:33.583: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Running", Reason="", readiness=true. Elapsed: 9.201353196s
Apr 16 04:20:35.823: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Running", Reason="", readiness=true. Elapsed: 11.441354528s
... skipping 3 lines ...
Apr 16 04:20:44.792: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Running", Reason="", readiness=true. Elapsed: 20.410182084s
Apr 16 04:20:47.034: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Running", Reason="", readiness=true. Elapsed: 22.652198942s
Apr 16 04:20:49.273: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Running", Reason="", readiness=true. Elapsed: 24.891192728s
Apr 16 04:20:51.513: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Running", Reason="", readiness=true. Elapsed: 27.130741966s
Apr 16 04:20:53.757: INFO: Pod "pod-subpath-test-secret-hpf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.375043502s
STEP: Saw pod success
Apr 16 04:20:53.757: INFO: Pod "pod-subpath-test-secret-hpf2" satisfied condition "Succeeded or Failed"
Apr 16 04:20:53.996: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-secret-hpf2 container test-container-subpath-secret-hpf2: <nil>
STEP: delete the pod
Apr 16 04:20:54.494: INFO: Waiting for pod pod-subpath-test-secret-hpf2 to disappear
Apr 16 04:20:54.733: INFO: Pod pod-subpath-test-secret-hpf2 no longer exists
STEP: Deleting pod pod-subpath-test-secret-hpf2
Apr 16 04:20:54.733: INFO: Deleting pod "pod-subpath-test-secret-hpf2" in namespace "subpath-8452"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:55.699: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 56 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":8,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:56.103: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 31 lines ...
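The Subpath "atomic writer volumes" spec that passed above mounts a single entry of a secret volume via VolumeMount.SubPath while the volume contents are rewritten atomically underneath. A hedged sketch of that pod shape (image tag and names are illustrative, not the test's exact pod):

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// SubPathSecretPod exposes only the "secret-key" entry of the secret volume
// at the mount point, which is the subPath behavior the test exercises.
func SubPathSecretPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "test-container-subpath-secret",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					SubPath:   "secret-key", // only this entry is visible at the mount point
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
		},
	}
}
```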
[It] should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Apr 16 04:20:24.759: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Apr 16 04:20:24.759: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fqvq
STEP: Creating a pod to test atomic-volume-subpath
Apr 16 04:20:25.000: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fqvq" in namespace "provisioning-5507" to be "Succeeded or Failed"
Apr 16 04:20:25.237: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Pending", Reason="", readiness=false. Elapsed: 237.322414ms
Apr 16 04:20:27.476: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475696226s
Apr 16 04:20:29.713: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.713430208s
Apr 16 04:20:31.951: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.951193087s
Apr 16 04:20:34.190: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Running", Reason="", readiness=true. Elapsed: 9.190236363s
Apr 16 04:20:36.429: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Running", Reason="", readiness=true. Elapsed: 11.428737689s
... skipping 3 lines ...
Apr 16 04:20:45.379: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Running", Reason="", readiness=true. Elapsed: 20.379426838s
Apr 16 04:20:47.618: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Running", Reason="", readiness=true. Elapsed: 22.61808018s
Apr 16 04:20:49.857: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Running", Reason="", readiness=true. Elapsed: 24.857023292s
Apr 16 04:20:52.095: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Running", Reason="", readiness=true. Elapsed: 27.095225058s
Apr 16 04:20:54.333: INFO: Pod "pod-subpath-test-inlinevolume-fqvq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.332959996s
STEP: Saw pod success
Apr 16 04:20:54.333: INFO: Pod "pod-subpath-test-inlinevolume-fqvq" satisfied condition "Succeeded or Failed"
Apr 16 04:20:54.570: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-fqvq container test-container-subpath-inlinevolume-fqvq: <nil>
STEP: delete the pod
Apr 16 04:20:55.056: INFO: Waiting for pod pod-subpath-test-inlinevolume-fqvq to disappear
Apr 16 04:20:55.295: INFO: Pod pod-subpath-test-inlinevolume-fqvq no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fqvq
Apr 16 04:20:55.295: INFO: Deleting pod "pod-subpath-test-inlinevolume-fqvq" in namespace "provisioning-5507"
... skipping 12 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:56.503: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver hostPath doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 24 lines ...
Apr 16 04:20:48.392: INFO: PersistentVolumeClaim pvc-gnk5p found but phase is Pending instead of Bound.
Apr 16 04:20:50.627: INFO: PersistentVolumeClaim pvc-gnk5p found and phase=Bound (4.706307175s)
Apr 16 04:20:50.627: INFO: Waiting up to 3m0s for PersistentVolume local-psmhp to have phase Bound
Apr 16 04:20:50.862: INFO: PersistentVolume local-psmhp found and phase=Bound (234.913744ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j9ln
STEP: Creating a pod to test subpath
Apr 16 04:20:51.570: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j9ln" in namespace "provisioning-4276" to be "Succeeded or Failed"
Apr 16 04:20:51.805: INFO: Pod "pod-subpath-test-preprovisionedpv-j9ln": Phase="Pending", Reason="", readiness=false. Elapsed: 235.045946ms
Apr 16 04:20:54.042: INFO: Pod "pod-subpath-test-preprovisionedpv-j9ln": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.472173075s
STEP: Saw pod success
Apr 16 04:20:54.042: INFO: Pod "pod-subpath-test-preprovisionedpv-j9ln" satisfied condition "Succeeded or Failed"
Apr 16 04:20:54.278: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-j9ln container test-container-subpath-preprovisionedpv-j9ln: <nil>
STEP: delete the pod
Apr 16 04:20:54.753: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j9ln to disappear
Apr 16 04:20:54.988: INFO: Pod pod-subpath-test-preprovisionedpv-j9ln no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j9ln
Apr 16 04:20:54.988: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j9ln" in namespace "provisioning-4276"
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":9,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:20:58.215: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
• [SLOW TEST:10.303 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:14.756 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be submitted and removed [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Apr 16 04:20:33.173: INFO: PersistentVolumeClaim pvc-k6p86 found but phase is Pending instead of Bound.
Apr 16 04:20:35.417: INFO: PersistentVolumeClaim pvc-k6p86 found and phase=Bound (2.481791245s)
Apr 16 04:20:35.417: INFO: Waiting up to 3m0s for PersistentVolume local-5fdq9 to have phase Bound
Apr 16 04:20:35.657: INFO: PersistentVolume local-5fdq9 found and phase=Bound (240.228151ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dhvf
STEP: Creating a pod to test subpath
Apr 16 04:20:36.373: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dhvf" in namespace "provisioning-5856" to be "Succeeded or Failed"
Apr 16 04:20:36.611: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Pending", Reason="", readiness=false. Elapsed: 237.688658ms
Apr 16 04:20:38.850: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476935326s
Apr 16 04:20:41.089: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715693396s
Apr 16 04:20:43.327: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953838174s
Apr 16 04:20:45.565: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.191392897s
Apr 16 04:20:47.803: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.429804609s
Apr 16 04:20:50.044: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.670483763s
Apr 16 04:20:52.282: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.909329598s
Apr 16 04:20:54.521: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.14766828s
Apr 16 04:20:56.759: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.385897149s
STEP: Saw pod success
Apr 16 04:20:56.759: INFO: Pod "pod-subpath-test-preprovisionedpv-dhvf" satisfied condition "Succeeded or Failed"
Apr 16 04:20:56.996: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-dhvf container test-container-subpath-preprovisionedpv-dhvf: <nil>
STEP: delete the pod
Apr 16 04:20:57.479: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dhvf to disappear
Apr 16 04:20:57.718: INFO: Pod pod-subpath-test-preprovisionedpv-dhvf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dhvf
Apr 16 04:20:57.718: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dhvf" in namespace "provisioning-5856"
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":0,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:20:55.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 16 04:20:57.172: INFO: Waiting up to 5m0s for pod "security-context-3a9b3bde-f9f2-43d5-ad53-89dd3a71bed2" in namespace "security-context-3261" to be "Succeeded or Failed"
Apr 16 04:20:57.412: INFO: Pod "security-context-3a9b3bde-f9f2-43d5-ad53-89dd3a71bed2": Phase="Pending", Reason="", readiness=false. Elapsed: 239.56673ms
Apr 16 04:20:59.653: INFO: Pod "security-context-3a9b3bde-f9f2-43d5-ad53-89dd3a71bed2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.480434701s
STEP: Saw pod success
Apr 16 04:20:59.653: INFO: Pod "security-context-3a9b3bde-f9f2-43d5-ad53-89dd3a71bed2" satisfied condition "Succeeded or Failed"
Apr 16 04:20:59.895: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod security-context-3a9b3bde-f9f2-43d5-ad53-89dd3a71bed2 container test-container: <nil>
STEP: delete the pod
Apr 16 04:21:00.396: INFO: Waiting for pod security-context-3a9b3bde-f9f2-43d5-ad53-89dd3a71bed2 to disappear
Apr 16 04:21:00.638: INFO: Pod security-context-3a9b3bde-f9f2-43d5-ad53-89dd3a71bed2 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.387 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 147 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:21:02.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4611" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:03.203: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-40a8b970-1f2c-4592-8b7c-d87625beffa7
STEP: Creating a pod to test consume configMaps
Apr 16 04:21:01.620: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5697fe2-8fc4-4d7c-9aff-69cef2c5c503" in namespace "configmap-7029" to be "Succeeded or Failed"
Apr 16 04:21:01.854: INFO: Pod "pod-configmaps-a5697fe2-8fc4-4d7c-9aff-69cef2c5c503": Phase="Pending", Reason="", readiness=false. Elapsed: 234.113034ms
Apr 16 04:21:04.089: INFO: Pod "pod-configmaps-a5697fe2-8fc4-4d7c-9aff-69cef2c5c503": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.469520682s
STEP: Saw pod success
Apr 16 04:21:04.089: INFO: Pod "pod-configmaps-a5697fe2-8fc4-4d7c-9aff-69cef2c5c503" satisfied condition "Succeeded or Failed"
Apr 16 04:21:04.323: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-configmaps-a5697fe2-8fc4-4d7c-9aff-69cef2c5c503 container agnhost-container: <nil>
STEP: delete the pod
Apr 16 04:21:04.797: INFO: Waiting for pod pod-configmaps-a5697fe2-8fc4-4d7c-9aff-69cef2c5c503 to disappear
Apr 16 04:21:05.032: INFO: Pod pod-configmaps-a5697fe2-8fc4-4d7c-9aff-69cef2c5c503 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 14 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Apr 16 04:21:02.577: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-697" to be "Succeeded or Failed"
Apr 16 04:21:02.816: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 239.006867ms
Apr 16 04:21:05.056: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.479317615s
Apr 16 04:21:05.056: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:21:05.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-697" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":3,"skipped":6,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
• [SLOW TEST:26.556 seconds]
[sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:07.126: INFO: Only supported for providers [openstack] (not aws)
... skipping 23 lines ...
Apr 16 04:21:03.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Apr 16 04:21:04.642: INFO: Waiting up to 5m0s for pod "var-expansion-4ba70f15-445c-485f-9187-5aff1128a7ea" in namespace "var-expansion-1491" to be "Succeeded or Failed"
Apr 16 04:21:04.877: INFO: Pod "var-expansion-4ba70f15-445c-485f-9187-5aff1128a7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 234.941429ms
Apr 16 04:21:07.114: INFO: Pod "var-expansion-4ba70f15-445c-485f-9187-5aff1128a7ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.472630444s
STEP: Saw pod success
Apr 16 04:21:07.115: INFO: Pod "var-expansion-4ba70f15-445c-485f-9187-5aff1128a7ea" satisfied condition "Succeeded or Failed"
Apr 16 04:21:07.350: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod var-expansion-4ba70f15-445c-485f-9187-5aff1128a7ea container dapi-container: <nil>
STEP: delete the pod
Apr 16 04:21:07.828: INFO: Waiting for pod var-expansion-4ba70f15-445c-485f-9187-5aff1128a7ea to disappear
Apr 16 04:21:08.064: INFO: Pod var-expansion-4ba70f15-445c-485f-9187-5aff1128a7ea no longer exists
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.321 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should allow substituting values in a volume subpath [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:08.555: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 33 lines ...
STEP: Destroying namespace "apply-780" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":6,"skipped":33,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:12.646: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 106 lines ...
Driver local doesn't support InlineVolume -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:21:05.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
Apr 16 04:21:11.412: INFO: The status of Pod pod-update-activedeadlineseconds-08bddc5b-e38c-4cf5-8b1b-948bfe2a1f79 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 04:21:13.412: INFO: The status of Pod pod-update-activedeadlineseconds-08bddc5b-e38c-4cf5-8b1b-948bfe2a1f79 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 16 04:21:14.857: INFO: Successfully updated pod "pod-update-activedeadlineseconds-08bddc5b-e38c-4cf5-8b1b-948bfe2a1f79"
Apr 16 04:21:14.857: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-08bddc5b-e38c-4cf5-8b1b-948bfe2a1f79" in namespace "pods-6497" to be "terminated due to deadline exceeded"
Apr 16 04:21:15.092: INFO: Pod "pod-update-activedeadlineseconds-08bddc5b-e38c-4cf5-8b1b-948bfe2a1f79": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 234.120851ms
Apr 16 04:21:15.092: INFO: Pod "pod-update-activedeadlineseconds-08bddc5b-e38c-4cf5-8b1b-948bfe2a1f79" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:21:15.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6497" for this suite.
... skipping 79 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
should contain last line of the log
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:615
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":1,"skipped":2,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
Apr 16 04:20:24.699: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 16 04:20:24.937: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Apr 16 04:20:25.411: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Apr 16 04:20:26.124: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5931" in namespace "provisioning-5931" to be "Succeeded or Failed"
Apr 16 04:20:26.364: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 239.958563ms
Apr 16 04:20:28.602: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477639742s
Apr 16 04:20:30.849: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 4.724694291s
Apr 16 04:20:33.087: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 6.96258117s
Apr 16 04:20:35.326: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 9.201072426s
Apr 16 04:20:37.563: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 11.43869073s
Apr 16 04:20:39.802: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 13.67746589s
Apr 16 04:20:42.040: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 15.9154848s
Apr 16 04:20:44.279: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 18.154226175s
Apr 16 04:20:46.517: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.392853972s
STEP: Saw pod success
Apr 16 04:20:46.517: INFO: Pod "hostpath-symlink-prep-provisioning-5931" satisfied condition "Succeeded or Failed"
Apr 16 04:20:46.518: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5931" in namespace "provisioning-5931"
Apr 16 04:20:46.757: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5931" to be fully deleted
Apr 16 04:20:46.994: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-rjsf
STEP: Creating a pod to test atomic-volume-subpath
Apr 16 04:20:47.235: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-rjsf" in namespace "provisioning-5931" to be "Succeeded or Failed"
Apr 16 04:20:47.474: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Pending", Reason="", readiness=false. Elapsed: 238.119784ms
Apr 16 04:20:49.712: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476735953s
Apr 16 04:20:51.950: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Running", Reason="", readiness=true. Elapsed: 4.714667174s
Apr 16 04:20:54.189: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Running", Reason="", readiness=true. Elapsed: 6.953828278s
Apr 16 04:20:56.428: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Running", Reason="", readiness=true. Elapsed: 9.192779129s
Apr 16 04:20:58.667: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Running", Reason="", readiness=true. Elapsed: 11.431963296s
Apr 16 04:21:00.906: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Running", Reason="", readiness=true. Elapsed: 13.670915036s
Apr 16 04:21:03.144: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Running", Reason="", readiness=true. Elapsed: 15.909067711s
Apr 16 04:21:05.382: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Running", Reason="", readiness=true. Elapsed: 18.147090087s
Apr 16 04:21:07.620: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Running", Reason="", readiness=true. Elapsed: 20.385039558s
Apr 16 04:21:09.859: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Running", Reason="", readiness=true. Elapsed: 22.623512661s
Apr 16 04:21:12.097: INFO: Pod "pod-subpath-test-inlinevolume-rjsf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.861713164s
STEP: Saw pod success
Apr 16 04:21:12.097: INFO: Pod "pod-subpath-test-inlinevolume-rjsf" satisfied condition "Succeeded or Failed"
Apr 16 04:21:12.334: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-rjsf container test-container-subpath-inlinevolume-rjsf: <nil>
STEP: delete the pod
Apr 16 04:21:12.823: INFO: Waiting for pod pod-subpath-test-inlinevolume-rjsf to disappear
Apr 16 04:21:13.062: INFO: Pod pod-subpath-test-inlinevolume-rjsf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-rjsf
Apr 16 04:21:13.062: INFO: Deleting pod "pod-subpath-test-inlinevolume-rjsf" in namespace "provisioning-5931"
STEP: Deleting pod
Apr 16 04:21:13.306: INFO: Deleting pod "pod-subpath-test-inlinevolume-rjsf" in namespace "provisioning-5931"
Apr 16 04:21:13.784: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5931" in namespace "provisioning-5931" to be "Succeeded or Failed"
Apr 16 04:21:14.021: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 237.279543ms
Apr 16 04:21:16.259: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47541712s
Apr 16 04:21:18.497: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Pending", Reason="", readiness=false. Elapsed: 4.71313226s
Apr 16 04:21:20.735: INFO: Pod "hostpath-symlink-prep-provisioning-5931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.951540386s
STEP: Saw pod success
Apr 16 04:21:20.735: INFO: Pod "hostpath-symlink-prep-provisioning-5931" satisfied condition "Succeeded or Failed"
Apr 16 04:21:20.735: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5931" in namespace "provisioning-5931"
Apr 16 04:21:20.982: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5931" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:21:21.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5931" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":15,"failed":0}
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:21:15.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
on terminated container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":12,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0416 04:20:22.445715 6517 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 16 04:20:22.445: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 16 04:20:22.682: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Apr 16 04:20:23.151: INFO: PodSpec: initContainers in spec.initContainers
Apr 16 04:21:22.105: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4cc0d84b-37e3-404b-ab49-171fb802c3c8", GenerateName:"", Namespace:"init-container-9514", SelfLink:"", UID:"51b78c75-819c-410c-8e1c-ad01c557be39", ResourceVersion:"3281", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63785679623, loc:(*time.Location)(0xa0acfa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"151068813"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00432c960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00432c978), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00432c990), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00432c9a8), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-xg4ww", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil),
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc004502ba0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-xg4ww", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-xg4ww", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-xg4ww", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003d53028), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-56-43.ap-south-1.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003aeb180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003d530a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003d530c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003d530c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003d530cc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00433d300), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63785679623, loc:(*time.Location)(0xa0acfa0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63785679623, loc:(*time.Location)(0xa0acfa0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63785679623, loc:(*time.Location)(0xa0acfa0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63785679623, loc:(*time.Location)(0xa0acfa0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.56.43", PodIP:"100.96.1.3", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.1.3"}}, StartTime:(*v1.Time)(0xc00432c9d8), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003aeb260)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003aeb2d0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://dee306dbb82677d202a90c71d08bf53b1060e490d60ab858d44597243ff2ec3b", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004502ca0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004502c80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc003d5314f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:21:22.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9514" for this suite.
• [SLOW TEST:61.087 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
... skipping 8 lines ...
Apr 16 04:20:26.137: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volume-44676gn9g
STEP: creating a claim
Apr 16 04:20:26.377: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-8kxl
STEP: Creating a pod to test exec-volume-test
Apr 16 04:20:27.096: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-8kxl" in namespace "volume-4467" to be "Succeeded or Failed"
Apr 16 04:20:27.335: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 238.761224ms
Apr 16 04:20:29.575: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479020134s
Apr 16 04:20:31.815: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.719003832s
Apr 16 04:20:34.056: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.960253126s
Apr 16 04:20:36.297: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200852278s
Apr 16 04:20:38.538: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 11.441712684s
... skipping 3 lines ...
Apr 16 04:20:47.498: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 20.401621245s
Apr 16 04:20:49.739: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 22.642368226s
Apr 16 04:20:51.979: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 24.882525756s
Apr 16 04:20:54.219: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Pending", Reason="", readiness=false. Elapsed: 27.122698677s
Apr 16 04:20:56.459: INFO: Pod "exec-volume-test-dynamicpv-8kxl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.362534956s
STEP: Saw pod success
Apr 16 04:20:56.459: INFO: Pod "exec-volume-test-dynamicpv-8kxl" satisfied condition "Succeeded or Failed"
Apr 16 04:20:56.698: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod exec-volume-test-dynamicpv-8kxl container exec-container-dynamicpv-8kxl: <nil>
STEP: delete the pod
Apr 16 04:20:57.183: INFO: Waiting for pod exec-volume-test-dynamicpv-8kxl to disappear
Apr 16 04:20:57.421: INFO: Pod exec-volume-test-dynamicpv-8kxl no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-8kxl
Apr 16 04:20:57.421: INFO: Deleting pod "exec-volume-test-dynamicpv-8kxl" in namespace "volume-4467"
... skipping 20 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":3,"failed":0}
SSSS
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:21:22.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
• [SLOW TEST:32.520 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should support cascading deletion of custom resources
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:915
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":2,"skipped":14,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:29.064: INFO: Only supported for providers [azure] (not aws)
... skipping 62 lines ...
Apr 16 04:21:17.858: INFO: PersistentVolumeClaim pvc-xxrqr found but phase is Pending instead of Bound.
Apr 16 04:21:20.096: INFO: PersistentVolumeClaim pvc-xxrqr found and phase=Bound (4.715336426s)
Apr 16 04:21:20.096: INFO: Waiting up to 3m0s for PersistentVolume local-6zhng to have phase Bound
Apr 16 04:21:20.332: INFO: PersistentVolume local-6zhng found and phase=Bound (236.223526ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-prmg
STEP: Creating a pod to test subpath
Apr 16 04:21:21.043: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-prmg" in namespace "provisioning-6294" to be "Succeeded or Failed"
Apr 16 04:21:21.280: INFO: Pod "pod-subpath-test-preprovisionedpv-prmg": Phase="Pending", Reason="", readiness=false. Elapsed: 236.330557ms
Apr 16 04:21:23.517: INFO: Pod "pod-subpath-test-preprovisionedpv-prmg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474196527s
Apr 16 04:21:25.755: INFO: Pod "pod-subpath-test-preprovisionedpv-prmg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.711843746s
STEP: Saw pod success
Apr 16 04:21:25.755: INFO: Pod "pod-subpath-test-preprovisionedpv-prmg" satisfied condition "Succeeded or Failed"
Apr 16 04:21:25.992: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-prmg container test-container-volume-preprovisionedpv-prmg: <nil>
STEP: delete the pod
Apr 16 04:21:26.483: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-prmg to disappear
Apr 16 04:21:26.720: INFO: Pod pod-subpath-test-preprovisionedpv-prmg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-prmg
Apr 16 04:21:26.720: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-prmg" in namespace "provisioning-6294"
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing directory
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":39,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:30.000: INFO: Only supported for providers [gce gke] (not aws)
... skipping 44 lines ...
Apr 16 04:21:17.792: INFO: PersistentVolumeClaim pvc-w77rz found but phase is Pending instead of Bound.
Apr 16 04:21:20.027: INFO: PersistentVolumeClaim pvc-w77rz found and phase=Bound (13.655101919s)
Apr 16 04:21:20.027: INFO: Waiting up to 3m0s for PersistentVolume local-59qkm to have phase Bound
Apr 16 04:21:20.262: INFO: PersistentVolume local-59qkm found and phase=Bound (234.773951ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xbhg
STEP: Creating a pod to test subpath
Apr 16 04:21:20.969: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xbhg" in namespace "provisioning-2352" to be "Succeeded or Failed"
Apr 16 04:21:21.204: INFO: Pod "pod-subpath-test-preprovisionedpv-xbhg": Phase="Pending", Reason="", readiness=false. Elapsed: 235.102321ms
Apr 16 04:21:23.441: INFO: Pod "pod-subpath-test-preprovisionedpv-xbhg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471839039s
Apr 16 04:21:25.677: INFO: Pod "pod-subpath-test-preprovisionedpv-xbhg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.708101s
Elapsed: 4.708101s [1mSTEP[0m: Saw pod success Apr 16 04:21:25.677: INFO: Pod "pod-subpath-test-preprovisionedpv-xbhg" satisfied condition "Succeeded or Failed" Apr 16 04:21:25.913: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-xbhg container test-container-subpath-preprovisionedpv-xbhg: <nil> [1mSTEP[0m: delete the pod Apr 16 04:21:26.407: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xbhg to disappear Apr 16 04:21:26.645: INFO: Pod pod-subpath-test-preprovisionedpv-xbhg no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-xbhg Apr 16 04:21:26.645: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xbhg" in namespace "provisioning-2352" ... skipping 22 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:21:31.430: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping ... skipping 48 lines ... [AfterEach] [sig-api-machinery] client-go should negotiate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:21:31.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":3,"skipped":25,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-instrumentation] MetricsGrabber /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 11 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:21:31.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "metrics-grabber-6794" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":3,"skipped":19,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:21:32.268: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping ... skipping 71 lines ... 
• [SLOW TEST:30.141 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:32.927: INFO: Only supported for providers [openstack] (not aws)
... skipping 177 lines ...
STEP: Destroying namespace "services-538" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:34.805: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:21:33.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Apr 16 04:21:34.468: INFO: found topology map[topology.kubernetes.io/zone:ap-south-1a]
Apr 16 04:21:34.469: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Apr 16 04:21:34.469: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
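------------------------------
The "Only supported for providers [...] (not aws)" and "doesn't support DynamicPV -- skipping" lines above are gates evaluated before each spec body runs: the suite compares the configured cloud provider or driver capabilities against the spec's requirements and skips on mismatch. A self-contained sketch of the provider gate, with hypothetical names standing in for the framework's real identifiers:

// Package skipsketch shows the shape of the provider gate behind the
// "Only supported for providers [...]" skips; names are illustrative.
package skipsketch

import "fmt"

// TestContext mimics the framework's notion of the current cloud provider.
var TestContext = struct{ Provider string }{Provider: "aws"}

// ErrSkip signals that a spec should be skipped rather than failed.
type ErrSkip struct{ Reason string }

func (e ErrSkip) Error() string { return e.Reason }

// SkipUnlessProviderIs returns an ErrSkip unless the current provider is
// in the supported list, producing the same message seen in the log.
func SkipUnlessProviderIs(supported ...string) error {
	for _, p := range supported {
		if p == TestContext.Provider {
			return nil
		}
	}
	return ErrSkip{fmt.Sprintf("Only supported for providers %v (not %s)", supported, TestContext.Provider)}
}
------------------------------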
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: aws]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Not enough topologies in cluster -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 149 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSIStorageCapacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
CSIStorageCapacity used, insufficient capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":3,"skipped":29,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:36.489: INFO: Only supported for providers [openstack] (not aws)
... skipping 67 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl apply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:803
apply set/view last-applied
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:838
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":3,"skipped":7,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:37.298: INFO: Only supported for providers [vsphere] (not aws)
... skipping 95 lines ...
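------------------------------
The topology spec above skips because this cluster spans a single zone (ap-south-1a): a conflicting-AllowedTopologies test needs at least two distinct zones to construct a conflict. A sketch of the zone-counting check behind "Not enough topologies in cluster -- skipping", assuming a reachable clientset (helper names illustrative):

// Package toposketch sketches the zone-count gate used by the
// AllowedTopologies conflict test.
package toposketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const zoneLabel = "topology.kubernetes.io/zone"

// distinctZones returns the set of zone labels across all nodes.
func distinctZones(cs kubernetes.Interface) (map[string]bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	zones := map[string]bool{}
	for _, n := range nodes.Items {
		if z, ok := n.Labels[zoneLabel]; ok {
			zones[z] = true
		}
	}
	return zones, nil
}

// requireZones mirrors the skip: the conflict test needs >= min zones.
func requireZones(cs kubernetes.Interface, min int) error {
	zones, err := distinctZones(cs)
	if err != nil {
		return err
	}
	if len(zones) < min {
		return fmt.Errorf("Not enough topologies in cluster -- skipping (found %d, need %d)", len(zones), min)
	}
	return nil
}
------------------------------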
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
should return command exit codes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
running a failing command
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:517
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":4,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:37.569: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 156 lines ...
• [SLOW TEST:6.740 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 55 lines ...
Apr 16 04:21:25.130: INFO: Waiting for pod aws-client to disappear
Apr 16 04:21:25.365: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Apr 16 04:21:25.366: INFO: Deleting PersistentVolumeClaim "pvc-mrhm7"
Apr 16 04:21:25.601: INFO: Deleting PersistentVolume "aws-6k9hd"
Apr 16 04:21:26.176: INFO: Couldn't delete PD "aws://ap-south-1a/vol-073bed31300751a50", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-073bed31300751a50 is currently attached to i-0d4ed8f350edc312b
status code: 400, request id: 5edff3ac-ee81-45c7-a28d-25314892d5fe
Apr 16 04:21:32.236: INFO: Couldn't delete PD "aws://ap-south-1a/vol-073bed31300751a50", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-073bed31300751a50 is currently attached to i-0d4ed8f350edc312b
status code: 400, request id: 2c47130c-7dbd-4de9-85fd-df7d8f8bcc8c
Apr 16 04:21:38.366: INFO: Successfully deleted PD "aws://ap-south-1a/vol-073bed31300751a50".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:21:38.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6591" for this suite.
... skipping 6 lines ...
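------------------------------
The PD cleanup above retries the EBS delete while AWS reports the transient VolumeInUse state (the volume is still attached to the instance while detach completes), sleeping 5s between attempts and succeeding on the third try. A dependency-free sketch of that loop; deleteFn stands in for the real EC2 DeleteVolume call:

// Package ebsretry sketches the delete-with-retry behaviour visible above.
package ebsretry

import (
	"fmt"
	"strings"
	"time"
)

// deleteVolumeWithRetry keeps retrying while the error looks like the
// transient "VolumeInUse" state, matching the "Couldn't delete PD ...,
// sleeping 5s" lines in the log.
func deleteVolumeWithRetry(volID string, attempts int, deleteFn func(string) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = deleteFn(volID); err == nil {
			fmt.Printf("Successfully deleted PD %q.\n", volID)
			return nil
		}
		if !strings.Contains(err.Error(), "VolumeInUse") {
			return err // not the transient attach state; give up immediately
		}
		fmt.Printf("Couldn't delete PD %q, sleeping 5s: %v\n", volID, err)
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("giving up on PD %q after %d attempts: %w", volID, attempts, err)
}
------------------------------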
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:39.080: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:22:42.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-3021" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:42.692: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 64 lines ...
Apr 16 04:21:38.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 16 04:21:39.686: INFO: Waiting up to 5m0s for pod "pod-c27d85dc-f8f0-45c1-90d4-6ca4af081cd5" in namespace "emptydir-9131" to be "Succeeded or Failed"
Apr 16 04:21:39.924: INFO: Pod "pod-c27d85dc-f8f0-45c1-90d4-6ca4af081cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 237.177539ms
Apr 16 04:21:42.162: INFO: Pod "pod-c27d85dc-f8f0-45c1-90d4-6ca4af081cd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.476008934s
STEP: Saw pod success
Apr 16 04:21:42.162: INFO: Pod "pod-c27d85dc-f8f0-45c1-90d4-6ca4af081cd5" satisfied condition "Succeeded or Failed"
Apr 16 04:21:42.406: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-c27d85dc-f8f0-45c1-90d4-6ca4af081cd5 container test-container: <nil>
STEP: delete the pod
Apr 16 04:21:42.891: INFO: Waiting for pod pod-c27d85dc-f8f0-45c1-90d4-6ca4af081cd5 to disappear
Apr 16 04:21:43.128: INFO: Pod pod-c27d85dc-f8f0-45c1-90d4-6ca4af081cd5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.346 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:83.862 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be restarted by liveness probe after startup probe enables it
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:377
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":1,"skipped":7,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 69 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-2886947f-7076-4dbd-9f7b-dd82064b90ae
STEP: Creating a pod to test consume secrets
Apr 16 04:21:44.398: INFO: Waiting up to 5m0s for pod "pod-secrets-b24b95bd-c59b-431b-918c-8b6d1843e88b" in namespace "secrets-1244" to be "Succeeded or Failed"
Apr 16 04:21:44.633: INFO: Pod "pod-secrets-b24b95bd-c59b-431b-918c-8b6d1843e88b": Phase="Pending", Reason="", readiness=false. Elapsed: 234.381893ms
Apr 16 04:21:46.868: INFO: Pod "pod-secrets-b24b95bd-c59b-431b-918c-8b6d1843e88b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470050503s
Apr 16 04:21:49.112: INFO: Pod "pod-secrets-b24b95bd-c59b-431b-918c-8b6d1843e88b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.713332815s
STEP: Saw pod success
Apr 16 04:21:49.112: INFO: Pod "pod-secrets-b24b95bd-c59b-431b-918c-8b6d1843e88b" satisfied condition "Succeeded or Failed"
Apr 16 04:21:49.347: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-secrets-b24b95bd-c59b-431b-918c-8b6d1843e88b container secret-volume-test: <nil>
STEP: delete the pod
Apr 16 04:21:49.825: INFO: Waiting for pod pod-secrets-b24b95bd-c59b-431b-918c-8b6d1843e88b to disappear
Apr 16 04:21:50.060: INFO: Pod pod-secrets-b24b95bd-c59b-431b-918c-8b6d1843e88b no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 164 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":1,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:51.355: INFO: Only supported for providers [gce gke] (not aws)
... skipping 317 lines ...
Driver hostPath doesn't support ext4 -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:21:25.656: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Apr 16 04:21:33.540: INFO: PersistentVolumeClaim pvc-lpgln found but phase is Pending instead of Bound.
Apr 16 04:21:35.775: INFO: PersistentVolumeClaim pvc-lpgln found and phase=Bound (2.469907408s)
Apr 16 04:21:35.776: INFO: Waiting up to 3m0s for PersistentVolume local-xxkz6 to have phase Bound
Apr 16 04:21:36.020: INFO: PersistentVolume local-xxkz6 found and phase=Bound (244.894762ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mvch
STEP: Creating a pod to test subpath
Apr 16 04:21:36.727: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mvch" in namespace "provisioning-9428" to be "Succeeded or Failed"
Apr 16 04:21:36.962: INFO: Pod "pod-subpath-test-preprovisionedpv-mvch": Phase="Pending", Reason="", readiness=false. Elapsed: 234.940575ms
Apr 16 04:21:39.198: INFO: Pod "pod-subpath-test-preprovisionedpv-mvch": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471418555s
Apr 16 04:21:41.434: INFO: Pod "pod-subpath-test-preprovisionedpv-mvch": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707014059s
Apr 16 04:21:43.670: INFO: Pod "pod-subpath-test-preprovisionedpv-mvch": Phase="Pending", Reason="", readiness=false. Elapsed: 6.943042779s
Apr 16 04:21:45.910: INFO: Pod "pod-subpath-test-preprovisionedpv-mvch": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.183090457s
STEP: Saw pod success
Apr 16 04:21:45.910: INFO: Pod "pod-subpath-test-preprovisionedpv-mvch" satisfied condition "Succeeded or Failed"
Apr 16 04:21:46.162: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-mvch container test-container-volume-preprovisionedpv-mvch: <nil>
STEP: delete the pod
Apr 16 04:21:46.662: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mvch to disappear
Apr 16 04:21:46.903: INFO: Pod pod-subpath-test-preprovisionedpv-mvch no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mvch
Apr 16 04:21:46.903: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mvch" in namespace "provisioning-9428"
... skipping 462 lines ...
• [SLOW TEST:19.812 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should not be very high [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":6,"skipped":57,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:54.704: INFO: Driver local doesn't support ext4 -- skipping
... skipping 23 lines ...
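------------------------------
The "found but phase is Pending instead of Bound" polls above come from the claim-binding wait: the suite polls the PVC until the volume binder reports phase Bound, then does the same for the backing PV. A sketch of the claim side, assuming a pre-built clientset (helper name hypothetical):

// Package pvwait sketches the PVC binding wait seen in the log.
package pvwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPVCBound polls until the claim reports phase Bound, logging the
// intermediate Pending polls the way the framework does.
func WaitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != v1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}
------------------------------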
Apr 16 04:21:51.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 16 04:21:52.824: INFO: Waiting up to 5m0s for pod "pod-167f2e7c-d6e1-4968-b6b5-de0146378057" in namespace "emptydir-5925" to be "Succeeded or Failed"
Apr 16 04:21:53.064: INFO: Pod "pod-167f2e7c-d6e1-4968-b6b5-de0146378057": Phase="Pending", Reason="", readiness=false. Elapsed: 239.821298ms
Apr 16 04:21:55.303: INFO: Pod "pod-167f2e7c-d6e1-4968-b6b5-de0146378057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.479143872s
STEP: Saw pod success
Apr 16 04:21:55.303: INFO: Pod "pod-167f2e7c-d6e1-4968-b6b5-de0146378057" satisfied condition "Succeeded or Failed"
Apr 16 04:21:55.542: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-167f2e7c-d6e1-4968-b6b5-de0146378057 container test-container: <nil>
STEP: delete the pod
Apr 16 04:21:56.024: INFO: Waiting for pod pod-167f2e7c-d6e1-4968-b6b5-de0146378057 to disappear
Apr 16 04:21:56.263: INFO: Pod pod-167f2e7c-d6e1-4968-b6b5-de0146378057 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.354 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:21:56.762: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 60 lines ...
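------------------------------
The emptydir permission specs above ("0644 on tmpfs", "0644 on node default medium") each build a single-container pod around an EmptyDir volume and a mount-test invocation that creates a file with the mode under test. A sketch of that pod; the image tag and args are assumptions for illustration, not the test's exact values:

// Package emptydirsketch shows roughly how the emptydir permission
// specs construct their test pod.
package emptydirsketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EmptyDirTestPod mounts an EmptyDir at /test-volume; medium is
// v1.StorageMediumDefault ("") for the node-default medium or
// v1.StorageMediumMemory for tmpfs.
func EmptyDirTestPod(name string, medium v1.StorageMedium) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: medium},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				// Illustrative mount-test invocation: create a 0644 file
				// on the volume and report its permissions.
				Args: []string{"mounttest", "--new_file_0644=/test-volume/test-file"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}
------------------------------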
Driver hostPath doesn't support PreprovisionedPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:21:53.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-f4c6eae2-3329-407d-a957-dd26e9330494
STEP: Creating a pod to test consume configMaps
Apr 16 04:21:54.951: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e87986ec-6106-4c64-8d45-a3770fd283d7" in namespace "projected-6723" to be "Succeeded or Failed"
Apr 16 04:21:55.188: INFO: Pod "pod-projected-configmaps-e87986ec-6106-4c64-8d45-a3770fd283d7": Phase="Pending", Reason="", readiness=false. Elapsed: 236.925216ms
Apr 16 04:21:57.425: INFO: Pod "pod-projected-configmaps-e87986ec-6106-4c64-8d45-a3770fd283d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.474126367s
STEP: Saw pod success
Apr 16 04:21:57.425: INFO: Pod "pod-projected-configmaps-e87986ec-6106-4c64-8d45-a3770fd283d7" satisfied condition "Succeeded or Failed"
Apr 16 04:21:57.659: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-projected-configmaps-e87986ec-6106-4c64-8d45-a3770fd283d7 container agnhost-container: <nil>
STEP: delete the pod
Apr 16 04:21:58.140: INFO: Waiting for pod pod-projected-configmaps-e87986ec-6106-4c64-8d45-a3770fd283d7 to disappear
Apr 16 04:21:58.378: INFO: Pod pod-projected-configmaps-e87986ec-6106-4c64-8d45-a3770fd283d7 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 30 lines ...
Apr 16 04:21:48.453: INFO: PersistentVolumeClaim pvc-ggfs7 found but phase is Pending instead of Bound.
Apr 16 04:21:50.692: INFO: PersistentVolumeClaim pvc-ggfs7 found and phase=Bound (6.96713574s)
Apr 16 04:21:50.692: INFO: Waiting up to 3m0s for PersistentVolume local-zhfdn to have phase Bound
Apr 16 04:21:50.931: INFO: PersistentVolume local-zhfdn found and phase=Bound (238.799081ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lk6k
STEP: Creating a pod to test subpath
Apr 16 04:21:51.687: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lk6k" in namespace "provisioning-5202" to be "Succeeded or Failed"
Apr 16 04:21:51.926: INFO: Pod "pod-subpath-test-preprovisionedpv-lk6k": Phase="Pending", Reason="", readiness=false. Elapsed: 238.909902ms
Apr 16 04:21:54.166: INFO: Pod "pod-subpath-test-preprovisionedpv-lk6k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479030208s
Apr 16 04:21:56.410: INFO: Pod "pod-subpath-test-preprovisionedpv-lk6k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.722212768s
STEP: Saw pod success
Apr 16 04:21:56.410: INFO: Pod "pod-subpath-test-preprovisionedpv-lk6k" satisfied condition "Succeeded or Failed"
Apr 16 04:21:56.648: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-lk6k container test-container-volume-preprovisionedpv-lk6k: <nil>
STEP: delete the pod
Apr 16 04:21:57.134: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lk6k to disappear
Apr 16 04:21:57.378: INFO: Pod pod-subpath-test-preprovisionedpv-lk6k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lk6k
Apr 16 04:21:57.378: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lk6k" in namespace "provisioning-5202"
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support non-existent path
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":10,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:00.607: INFO: Only supported for providers [vsphere] (not aws)
... skipping 67 lines ...
• [SLOW TEST:72.242 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:08.374: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 149 lines ...
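------------------------------
All of the subPath suites above (existing directory, non-existent path, readOnly file, file as subpath) exercise one core/v1 field: a VolumeMount that targets a sub-directory of the volume rather than its root. A minimal sketch, with names illustrative:

// Package subpathsketch outlines the VolumeMount shape behind the
// subPath test suites.
package subpathsketch

import v1 "k8s.io/api/core/v1"

// SubPathMount mounts only <volume>/<subPath> at mountPath, which is what
// lets the suites probe existing directories, non-existent paths, and
// readOnly behaviour independently of the volume root.
func SubPathMount(volumeName, mountPath, subPath string, readOnly bool) v1.VolumeMount {
	return v1.VolumeMount{
		Name:      volumeName,
		MountPath: mountPath,
		SubPath:   subPath,
		ReadOnly:  readOnly,
	}
}
------------------------------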
[BeforeEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:20:29.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-6642" for this suite.
• [SLOW TEST:99.582 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete failed finished jobs with limit of one job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:21:21.072: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
Apr 16 04:21:32.863: INFO: PersistentVolumeClaim pvc-spl99 found but phase is Pending instead of Bound.
Apr 16 04:21:35.101: INFO: PersistentVolumeClaim pvc-spl99 found and phase=Bound (4.710994676s)
Apr 16 04:21:35.101: INFO: Waiting up to 3m0s for PersistentVolume local-4ldvf to have phase Bound
Apr 16 04:21:35.336: INFO: PersistentVolume local-4ldvf found and phase=Bound (234.87756ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-h65p
STEP: Creating a pod to test atomic-volume-subpath
Apr 16 04:21:36.044: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-h65p" in namespace "provisioning-9372" to be "Succeeded or Failed"
Apr 16 04:21:36.280: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Pending", Reason="", readiness=false. Elapsed: 235.195132ms
Apr 16 04:21:38.515: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470361523s
Apr 16 04:21:40.752: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707356161s
Apr 16 04:21:42.991: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.946544077s
Apr 16 04:21:45.228: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Running", Reason="", readiness=true. Elapsed: 9.183443711s
Apr 16 04:21:47.463: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Running", Reason="", readiness=true. Elapsed: 11.418995903s
Apr 16 04:21:49.700: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Running", Reason="", readiness=true. Elapsed: 13.655486465s
Apr 16 04:21:51.935: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Running", Reason="", readiness=true. Elapsed: 15.890941997s
Apr 16 04:21:54.178: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Running", Reason="", readiness=true. Elapsed: 18.133912037s
Apr 16 04:21:56.414: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Running", Reason="", readiness=true. Elapsed: 20.36992525s
Apr 16 04:21:58.650: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Running", Reason="", readiness=true. Elapsed: 22.605951238s
Apr 16 04:22:00.893: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.848168972s
STEP: Saw pod success
Apr 16 04:22:00.893: INFO: Pod "pod-subpath-test-preprovisionedpv-h65p" satisfied condition "Succeeded or Failed"
Apr 16 04:22:01.129: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-h65p container test-container-subpath-preprovisionedpv-h65p: <nil>
STEP: delete the pod
Apr 16 04:22:01.645: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-h65p to disappear
Apr 16 04:22:01.881: INFO: Pod pod-subpath-test-preprovisionedpv-h65p no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-h65p
Apr 16 04:22:01.881: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-h65p" in namespace "provisioning-9372"
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":52,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:22:00.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Apr 16 04:22:02.081: INFO: Waiting up to 5m0s for pod "downward-api-85988519-3add-45ad-a6cb-7e476d1a5417" in namespace "downward-api-3823" to be "Succeeded or Failed"
Apr 16 04:22:02.330: INFO: Pod "downward-api-85988519-3add-45ad-a6cb-7e476d1a5417": Phase="Pending", Reason="", readiness=false. Elapsed: 248.314821ms
Apr 16 04:22:04.569: INFO: Pod "downward-api-85988519-3add-45ad-a6cb-7e476d1a5417": Phase="Pending", Reason="", readiness=false. Elapsed: 2.487762705s
Apr 16 04:22:06.809: INFO: Pod "downward-api-85988519-3add-45ad-a6cb-7e476d1a5417": Phase="Pending", Reason="", readiness=false. Elapsed: 4.727096771s
Apr 16 04:22:09.049: INFO: Pod "downward-api-85988519-3add-45ad-a6cb-7e476d1a5417": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.967370604s
STEP: Saw pod success
Apr 16 04:22:09.049: INFO: Pod "downward-api-85988519-3add-45ad-a6cb-7e476d1a5417" satisfied condition "Succeeded or Failed"
Apr 16 04:22:09.288: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod downward-api-85988519-3add-45ad-a6cb-7e476d1a5417 container dapi-container: <nil>
STEP: delete the pod
Apr 16 04:22:09.781: INFO: Waiting for pod downward-api-85988519-3add-45ad-a6cb-7e476d1a5417 to disappear
Apr 16 04:22:10.020: INFO: Pod downward-api-85988519-3add-45ad-a6cb-7e476d1a5417 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.864 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":16,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:10.526: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 30 lines ...
Apr 16 04:21:40.169: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-67834gv8c
STEP: creating a claim
Apr 16 04:21:40.404: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-dk9w
STEP: Creating a pod to test subpath
Apr 16 04:21:41.112: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dk9w" in namespace "provisioning-6783" to be "Succeeded or Failed"
Apr 16 04:21:41.347: INFO: Pod "pod-subpath-test-dynamicpv-dk9w": Phase="Pending", Reason="", readiness=false. Elapsed: 234.601401ms
Apr 16 04:21:43.583: INFO: Pod "pod-subpath-test-dynamicpv-dk9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470123882s
Apr 16 04:21:45.818: INFO: Pod "pod-subpath-test-dynamicpv-dk9w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.705607952s
Apr 16 04:21:48.053: INFO: Pod "pod-subpath-test-dynamicpv-dk9w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.940694322s
Apr 16 04:21:50.290: INFO: Pod "pod-subpath-test-dynamicpv-dk9w": Phase="Pending", Reason="", readiness=false. Elapsed: 9.177591888s
Apr 16 04:21:52.526: INFO: Pod "pod-subpath-test-dynamicpv-dk9w": Phase="Pending", Reason="", readiness=false. Elapsed: 11.413786399s
Apr 16 04:21:54.762: INFO: Pod "pod-subpath-test-dynamicpv-dk9w": Phase="Pending", Reason="", readiness=false. Elapsed: 13.650033931s
Apr 16 04:21:56.999: INFO: Pod "pod-subpath-test-dynamicpv-dk9w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.886379434s
STEP: Saw pod success
Apr 16 04:21:56.999: INFO: Pod "pod-subpath-test-dynamicpv-dk9w" satisfied condition "Succeeded or Failed"
Apr 16 04:21:57.234: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-dk9w container test-container-subpath-dynamicpv-dk9w: <nil>
STEP: delete the pod
Apr 16 04:21:57.715: INFO: Waiting for pod pod-subpath-test-dynamicpv-dk9w to disappear
Apr 16 04:21:57.951: INFO: Pod pod-subpath-test-dynamicpv-dk9w no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dk9w
Apr 16 04:21:57.951: INFO: Deleting pod "pod-subpath-test-dynamicpv-dk9w" in namespace "provisioning-6783"
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":31,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:10.569: INFO: Driver local doesn't support ext3 -- skipping
... skipping 89 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:22:12.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9058" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":43,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:12.830: INFO: Only supported for providers [vsphere] (not aws)
... skipping 177 lines ...
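------------------------------
The downward-API spec above asserts that, when a container declares no resource limits, resourceFieldRef env vars fall back to the node's allocatable CPU and memory. A sketch of the env wiring; the env var names follow the test's convention but are not guaranteed to match exactly:

// Package downwardsketch mirrors the downward-API env wiring for
// limits.cpu / limits.memory.
package downwardsketch

import v1 "k8s.io/api/core/v1"

// DownwardAPIEnv returns env vars sourced from the container's effective
// resource limits; with no limits declared these default to node allocatable.
func DownwardAPIEnv(containerName string) []v1.EnvVar {
	return []v1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &v1.EnvVarSource{
				ResourceFieldRef: &v1.ResourceFieldSelector{
					ContainerName: containerName,
					Resource:      "limits.cpu",
				},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &v1.EnvVarSource{
				ResourceFieldRef: &v1.ResourceFieldSelector{
					ContainerName: containerName,
					Resource:      "limits.memory",
				},
			},
		},
	}
}
------------------------------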
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI FSGroupPolicy [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
should modify fsGroup if fsGroupPolicy=File
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":1,"skipped":16,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:14.276: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 85 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":6,"skipped":70,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":2,"skipped":51,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:21:50.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
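------------------------------
The fsGroupPolicy=File spec above drives kubelet behaviour through a single field on the CSIDriver object: with the File policy, the kubelet always applies the pod's fsGroup to the volume's files, which the test verifies by checking group IDs inside the pod. A sketch of the object, assuming the storage/v1 CSIDriver API available at this cluster version:

// Package fsgroupsketch shows the CSIDriver field exercised by the
// fsGroupPolicy mock-volume tests.
package fsgroupsketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// CSIDriverWithFilePolicy returns a CSIDriver whose volumes always get
// fsGroup-based ownership and permission changes applied by the kubelet.
func CSIDriverWithFilePolicy(name string) *storagev1.CSIDriver {
	policy := storagev1.FileFSGroupPolicy
	return &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: storagev1.CSIDriverSpec{
			FSGroupPolicy: &policy,
		},
	}
}
------------------------------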
Apr 16 04:21:57.664: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Setting timeout (1s) shorter than webhook latency (5s) [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [1mSTEP[0m: Request fails when timeout (1s) is shorter than slow webhook latency (5s) [1mSTEP[0m: Having no error when timeout is shorter than webhook latency and failure policy is ignore [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [1mSTEP[0m: Having no error when timeout is longer than webhook latency [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [1mSTEP[0m: Having no error when timeout is empty (defaulted to 10s in v1) [1mSTEP[0m: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:22:14.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "webhook-4171" for this suite. [1mSTEP[0m: Destroying namespace "webhook-4171-markers" for this suite. ... skipping 4 lines ... [32m• [SLOW TEST:25.606 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should honor timeout [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":3,"skipped":51,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:16.140: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 21 lines ... Apr 16 04:21:37.325: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename svcaccounts [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 16 04:21:39.001: INFO: created pod Apr 16 04:21:39.001: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-7220" to be "Succeeded or Failed" Apr 16 04:21:39.242: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 240.277406ms Apr 16 04:21:41.481: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 2.479084056s Apr 16 04:21:43.721: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 4.719440876s Apr 16 04:21:45.968: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 6.966883903s Apr 16 04:21:48.212: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.210126082s [1mSTEP[0m: Saw pod success Apr 16 04:21:48.212: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Apr 16 04:22:18.213: INFO: polling logs Apr 16 04:22:18.501: INFO: Pod logs: 2022/04/16 04:21:40 OK: Got token 2022/04/16 04:21:40 validating with in-cluster discovery 2022/04/16 04:21:40 OK: got issuer https://api.internal.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io 2022/04/16 04:21:40 Full, not-validated claims: ... skipping 14 lines ... [32m• [SLOW TEST:41.894 seconds][0m [sig-auth] ServiceAccounts [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23[0m ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:19.238: INFO: Only supported for providers [vsphere] (not aws) ... skipping 154 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":1,"skipped":2,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 20 lines ... Apr 16 04:21:47.642: INFO: PersistentVolumeClaim pvc-fxlm7 found but phase is Pending instead of Bound. Apr 16 04:21:49.880: INFO: PersistentVolumeClaim pvc-fxlm7 found and phase=Bound (11.445781785s) Apr 16 04:21:49.880: INFO: Waiting up to 3m0s for PersistentVolume local-2lr49 to have phase Bound Apr 16 04:21:50.117: INFO: PersistentVolume local-2lr49 found and phase=Bound (236.987974ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-5n6x [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Apr 16 04:21:50.830: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5n6x" in namespace "provisioning-7499" to be "Succeeded or Failed" Apr 16 04:21:51.068: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Pending", Reason="", readiness=false. Elapsed: 237.261531ms Apr 16 04:21:53.306: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Running", Reason="", readiness=true. Elapsed: 2.475044813s Apr 16 04:21:55.545: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Running", Reason="", readiness=true. Elapsed: 4.714275427s Apr 16 04:21:57.785: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.9541947s Apr 16 04:22:00.026: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Running", Reason="", readiness=true. Elapsed: 9.195048127s Apr 16 04:22:02.267: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Running", Reason="", readiness=true. Elapsed: 11.436676283s Apr 16 04:22:04.506: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Running", Reason="", readiness=true. Elapsed: 13.675959116s Apr 16 04:22:06.744: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Running", Reason="", readiness=true. Elapsed: 15.913883598s Apr 16 04:22:08.982: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Running", Reason="", readiness=true. Elapsed: 18.151767698s Apr 16 04:22:11.220: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Running", Reason="", readiness=true. Elapsed: 20.389915547s Apr 16 04:22:13.460: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.629764534s [1mSTEP[0m: Saw pod success Apr 16 04:22:13.460: INFO: Pod "pod-subpath-test-preprovisionedpv-5n6x" satisfied condition "Succeeded or Failed" Apr 16 04:22:13.698: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-5n6x container test-container-subpath-preprovisionedpv-5n6x: <nil> [1mSTEP[0m: delete the pod Apr 16 04:22:14.181: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5n6x to disappear Apr 16 04:22:14.421: INFO: Pod pod-subpath-test-preprovisionedpv-5n6x no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-5n6x Apr 16 04:22:14.422: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5n6x" in namespace "provisioning-7499" ... skipping 22 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":23,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:19.488: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 50 lines ... 
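------------------------------
The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above are the e2e framework polling pod.Status.Phase until the pod terminates. A minimal client-go sketch of that polling pattern (the helper name waitForPodTerminated and the 2s interval are illustrative, not the framework's actual code):

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminated polls a pod until it reaches a terminal phase,
// mirroring the Phase=Pending/Running/Succeeded progression in the log.
func waitForPodTerminated(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		// Succeeded and Failed are terminal; anything else keeps polling.
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}
------------------------------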
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m Container restart [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130[0m should verify that container can restart successfully after configmaps modified [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":2,"skipped":15,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:22:23.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "disruption-6704" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0} [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:23.564: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 142 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":10,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:24.131: INFO: Only supported for providers [vsphere] (not aws) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 43 lines ... 
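------------------------------
The DisruptionController conformance test above creates a PodDisruptionBudget and checks its status. A hedged sketch of creating such an object with client-go (the name, label selector, and minAvailable value are illustrative, not the test's actual values):

package sketch

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createPDB asks the eviction API to keep at least two matching pods running.
func createPDB(c kubernetes.Interface, ns string) error {
	minAvailable := intstr.FromInt(2)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pdb", Namespace: ns},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "example"}},
		},
	}
	_, err := c.PolicyV1().PodDisruptionBudgets(ns).Create(context.TODO(), pdb, metav1.CreateOptions{})
	return err
}
------------------------------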
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 [1mSTEP[0m: Creating a pod to test emptydir 0644 on tmpfs Apr 16 04:22:20.719: INFO: Waiting up to 5m0s for pod "pod-7a25c4c4-ac85-48bf-886f-846b5ad37250" in namespace "emptydir-1286" to be "Succeeded or Failed" Apr 16 04:22:20.958: INFO: Pod "pod-7a25c4c4-ac85-48bf-886f-846b5ad37250": Phase="Pending", Reason="", readiness=false. Elapsed: 239.206454ms Apr 16 04:22:23.199: INFO: Pod "pod-7a25c4c4-ac85-48bf-886f-846b5ad37250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.479639872s [1mSTEP[0m: Saw pod success Apr 16 04:22:23.199: INFO: Pod "pod-7a25c4c4-ac85-48bf-886f-846b5ad37250" satisfied condition "Succeeded or Failed" Apr 16 04:22:23.437: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-7a25c4c4-ac85-48bf-886f-846b5ad37250 container test-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:22:23.928: INFO: Waiting for pod pod-7a25c4c4-ac85-48bf-886f-846b5ad37250 to disappear Apr 16 04:22:24.167: INFO: Pod pod-7a25c4c4-ac85-48bf-886f-846b5ad37250 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48[0m new files should be created with FSGroup ownership when container is non-root [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":5,"skipped":19,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 15 lines ... 
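------------------------------
The EmptyDir FSGroup test above runs a non-root container against a tmpfs-backed emptyDir and expects newly created files to carry the pod's fsGroup as group owner. A sketch of the relevant spec fields (image, IDs, and paths are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fsGroupPod mounts a memory-backed emptyDir and writes a file; with
// FSGroup set, the kubelet chowns the volume so the file's group is 123.
func fsGroupPod() *corev1.Pod {
	fsGroup, runAsUser := int64(123), int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup, RunAsUser: &runAsUser},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/tmpfs/file && ls -ln /mnt/tmpfs"},
				VolumeMounts: []corev1.VolumeMount{{Name: "tmpfs", MountPath: "/mnt/tmpfs"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "tmpfs",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}
------------------------------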
[32m• [SLOW TEST:11.318 seconds][0m [sig-storage] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m binary data should be reflected in volume [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":53,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:121 [1mSTEP[0m: Creating a pod with the kernel.shm_rmid_forced sysctl [1mSTEP[0m: Watching for error events or started pod [1mSTEP[0m: Waiting for pod completion [1mSTEP[0m: Checking that the pod succeeded [1mSTEP[0m: Getting logs from the pod [1mSTEP[0m: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:22:27.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "sysctl-7193" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]","total":-1,"completed":6,"skipped":31,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 22 lines ... [32m• [SLOW TEST:16.723 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should provide DNS for the cluster [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":7,"skipped":56,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 81 lines ... 
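------------------------------
The Sysctls test above sets kernel.shm_rmid_forced through the pod-level security context and then reads it back from inside the pod. A sketch of that shape (pod name and image are illustrative; "unsafe" sysctls additionally require the kubelet to be started with --allowed-unsafe-sysctls):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sysctlPod requests a namespaced kernel parameter for the pod's sandbox.
func sysctlPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}
------------------------------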
Apr 16 04:21:31.423: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:32.939: INFO: Exec stderr: "" Apr 16 04:21:35.656: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-3444"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-3444"/host; echo host > "/var/lib/kubelet/mount-propagation-3444"/host/file] Namespace:mount-propagation-3444 PodName:hostexec-ip-172-20-50-117.ap-south-1.compute.internal-2pgnz ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 16 04:21:35.656: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:37.394: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3444 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:37.394: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:38.914: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil> Apr 16 04:21:39.149: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3444 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:39.149: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:40.652: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:21:40.886: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3444 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:40.887: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:42.372: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:21:42.607: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3444 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:42.607: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:44.102: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:21:44.337: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3444 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:44.337: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:45.887: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil> Apr 16 04:21:46.124: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3444 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:46.124: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:47.631: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil> Apr 16 04:21:47.866: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3444 PodName:slave ContainerName:cntr Stdin:<nil> 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:47.866: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:49.466: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil> Apr 16 04:21:49.701: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3444 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:49.702: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:51.282: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:21:51.516: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3444 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:51.516: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:53.028: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:21:53.264: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3444 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:53.264: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:54.795: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil> Apr 16 04:21:55.029: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3444 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:55.029: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:56.542: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:21:56.777: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3444 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:56.777: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:21:58.294: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:21:58.529: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3444 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:21:58.529: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:00.144: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil> Apr 16 04:22:00.379: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3444 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:22:00.380: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:01.877: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:22:02.113: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3444 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:22:02.113: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:03.625: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:22:03.864: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3444 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:22:03.865: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:05.371: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:22:05.606: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3444 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:22:05.606: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:07.157: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:22:07.392: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3444 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:22:07.392: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:08.927: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:22:09.163: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3444 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:22:09.163: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:10.667: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil> Apr 16 04:22:10.902: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3444 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 04:22:10.902: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:12.412: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Apr 16 04:22:12.412: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c pidof kubelet] Namespace:mount-propagation-3444 PodName:hostexec-ip-172-20-50-117.ap-south-1.compute.internal-2pgnz ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 16 04:22:12.412: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:13.963: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 3245 -m cat "/var/lib/kubelet/mount-propagation-3444/host/file"] Namespace:mount-propagation-3444 PodName:hostexec-ip-172-20-50-117.ap-south-1.compute.internal-2pgnz ContainerName:agnhost-container Stdin:<nil> 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 16 04:22:13.964: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:22:15.463: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 3245 -m cat "/var/lib/kubelet/mount-propagation-3444/master/file"] Namespace:mount-propagation-3444 PodName:hostexec-ip-172-20-50-117.ap-south-1.compute.internal-2pgnz ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 16 04:22:15.463: INFO: >>> kubeConfig: /root/.kube/config ... skipping 29 lines ... [32m• [SLOW TEST:128.455 seconds][0m [sig-node] Mount propagation [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m should propagate mounts within defined scopes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:83[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","total":-1,"completed":1,"skipped":6,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:32.916: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 38 lines ... Apr 16 04:22:17.732: INFO: PersistentVolumeClaim pvc-hcntq found but phase is Pending instead of Bound. Apr 16 04:22:19.968: INFO: PersistentVolumeClaim pvc-hcntq found and phase=Bound (2.470639602s) Apr 16 04:22:19.968: INFO: Waiting up to 3m0s for PersistentVolume local-nccwk to have phase Bound Apr 16 04:22:20.204: INFO: PersistentVolume local-nccwk found and phase=Bound (236.004597ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-vhlk [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:22:20.912: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vhlk" in namespace "provisioning-882" to be "Succeeded or Failed" Apr 16 04:22:21.147: INFO: Pod "pod-subpath-test-preprovisionedpv-vhlk": Phase="Pending", Reason="", readiness=false. Elapsed: 235.30303ms Apr 16 04:22:23.383: INFO: Pod "pod-subpath-test-preprovisionedpv-vhlk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471548786s Apr 16 04:22:25.619: INFO: Pod "pod-subpath-test-preprovisionedpv-vhlk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.707160116s [1mSTEP[0m: Saw pod success Apr 16 04:22:25.619: INFO: Pod "pod-subpath-test-preprovisionedpv-vhlk" satisfied condition "Succeeded or Failed" Apr 16 04:22:25.854: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-vhlk container test-container-volume-preprovisionedpv-vhlk: <nil> [1mSTEP[0m: delete the pod Apr 16 04:22:26.330: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vhlk to disappear Apr 16 04:22:26.565: INFO: Pod pod-subpath-test-preprovisionedpv-vhlk no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-vhlk Apr 16 04:22:26.565: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vhlk" in namespace "provisioning-882" ... skipping 164 lines ... 
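------------------------------
The master/slave/private/default probe matrix earlier in the log maps directly onto the mountPropagation field of a container's volumeMount: "master" behaves like Bidirectional, "slave" like HostToContainer, and "private"/"default" like None (the default when the field is unset). A sketch of the field (volume and path names are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// propagatedMount returns a volumeMount with an explicit propagation mode,
// e.g. corev1.MountPropagationBidirectional (which also requires a
// privileged container) or corev1.MountPropagationHostToContainer.
func propagatedMount(mode corev1.MountPropagationMode) corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:             "test-mount",
		MountPath:        "/mnt/test",
		MountPropagation: &mode,
	}
}
------------------------------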
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should resize volume when PVC is edited while pod is using it [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":1,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:35.575: INFO: Only supported for providers [openstack] (not aws) ... skipping 93 lines ... Apr 16 04:21:57.576: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-7jfw6] to have phase Bound Apr 16 04:21:57.817: INFO: PersistentVolumeClaim pvc-7jfw6 found and phase=Bound (241.784105ms) [1mSTEP[0m: Deleting the previously created pod Apr 16 04:22:07.011: INFO: Deleting pod "pvc-volume-tester-pq46l" in namespace "csi-mock-volumes-933" Apr 16 04:22:07.250: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pq46l" to be fully deleted [1mSTEP[0m: Checking CSI driver logs Apr 16 04:22:09.979: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5a25a423-b9dc-4fc8-981d-41ff862bedf1/volumes/kubernetes.io~csi/pvc-1fa7a5c7-2640-4936-88b2-eddfa62c8290/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} [1mSTEP[0m: Deleting pod pvc-volume-tester-pq46l Apr 16 04:22:09.979: INFO: Deleting pod "pvc-volume-tester-pq46l" in namespace "csi-mock-volumes-933" [1mSTEP[0m: Deleting claim pvc-7jfw6 Apr 16 04:22:10.692: INFO: Waiting up to 2m0s for PersistentVolume pvc-1fa7a5c7-2640-4936-88b2-eddfa62c8290 to get deleted Apr 16 04:22:10.936: INFO: PersistentVolume pvc-1fa7a5c7-2640-4936-88b2-eddfa62c8290 found and phase=Released (244.087391ms) Apr 16 04:22:13.175: INFO: PersistentVolume pvc-1fa7a5c7-2640-4936-88b2-eddfa62c8290 found and phase=Released (2.483083326s) ... skipping 46 lines ... 
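------------------------------
The volume-expand test above grows a volume while a pod is using it by editing the claim, which works only when the StorageClass sets allowVolumeExpansion: true. A hedged sketch of the resize request (helper name and sizes are illustrative):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/client-go/kubernetes"
)

// expandPVC raises spec.resources.requests.storage on a bound claim;
// the volume plugin then resizes the backing volume, online if supported.
func expandPVC(c kubernetes.Interface, ns, name, newSize string) error {
	pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse(newSize)
	_, err = c.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{})
	return err
}
------------------------------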
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI workload information using mock driver [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444[0m should not be passed when podInfoOnMount=false [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":5,"skipped":49,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode ... skipping 66 lines ... Apr 16 04:22:35.613: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename containers [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test override arguments Apr 16 04:22:37.038: INFO: Waiting up to 5m0s for pod "client-containers-51bb6e14-6865-4a7d-bc42-341aa442690b" in namespace "containers-2575" to be "Succeeded or Failed" Apr 16 04:22:37.274: INFO: Pod "client-containers-51bb6e14-6865-4a7d-bc42-341aa442690b": Phase="Pending", Reason="", readiness=false. Elapsed: 236.576615ms Apr 16 04:22:39.513: INFO: Pod "client-containers-51bb6e14-6865-4a7d-bc42-341aa442690b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.474987691s [1mSTEP[0m: Saw pod success Apr 16 04:22:39.513: INFO: Pod "client-containers-51bb6e14-6865-4a7d-bc42-341aa442690b" satisfied condition "Succeeded or Failed" Apr 16 04:22:39.750: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod client-containers-51bb6e14-6865-4a7d-bc42-341aa442690b container agnhost-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:22:40.226: INFO: Waiting for pod client-containers-51bb6e14-6865-4a7d-bc42-341aa442690b to disappear Apr 16 04:22:40.462: INFO: Pod client-containers-51bb6e14-6865-4a7d-bc42-341aa442690b no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
[32m• [SLOW TEST:5.323 seconds][0m [sig-node] Docker Containers [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:40.948: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 64 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m With a server listening on localhost [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474[0m should support forwarding over websockets [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":8,"skipped":63,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:41.226: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 78 lines ... [36mDriver hostPath doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":2,"skipped":5,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:22:09.230: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename services [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 7 lines ... 
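------------------------------
The CronJob result recorded above ("delete failed finished jobs with limit of one job") exercises failedJobsHistoryLimit, which caps how many failed Jobs the controller keeps before garbage-collecting the oldest. A sketch of a CronJob with that limit (schedule, names, and the always-failing command are illustrative):

package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cronJobWithHistoryLimit keeps at most one failed Job around.
func cronJobWithHistoryLimit() *batchv1.CronJob {
	one := int32(1)
	return &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "failing-cron"},
		Spec: batchv1.CronJobSpec{
			Schedule:               "*/1 * * * *",
			FailedJobsHistoryLimit: &one,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyNever,
							Containers: []corev1.Container{{
								Name:    "fail",
								Image:   "busybox",
								Command: []string{"sh", "-c", "exit 1"},
							}},
						},
					},
				},
			},
		},
	}
}
------------------------------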
I0416 04:22:10.908393 6574 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-6763, replica count: 3 I0416 04:22:14.160202 6574 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 04:22:17.161099 6574 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 16 04:22:17.647: INFO: Creating new exec pod Apr 16 04:22:21.363: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6763 exec execpod-affinityl7gfn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Apr 16 04:22:28.710: INFO: rc: 1 Apr 16 04:22:28.710: INFO: Service reachability failing with error: error running /logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6763 exec execpod-affinityl7gfn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: getaddrinfo: Try again command terminated with exit code 1 error: exit status 1 Retrying... Apr 16 04:22:29.711: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6763 exec execpod-affinityl7gfn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Apr 16 04:22:32.035: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Apr 16 04:22:32.035: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 16 04:22:32.036: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6763 exec execpod-affinityl7gfn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.66.248 80' ... skipping 33 lines ... [32m• [SLOW TEST:32.132 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":5,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:41.380: INFO: Only supported for providers [vsphere] (not aws) ... skipping 67 lines ... 
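------------------------------
The session-affinity run above creates a ClusterIP Service with sessionAffinity: ClientIP and probes it repeatedly with nc from an exec pod, expecting every request to land on the same backend; the first probe fails on DNS (getaddrinfo: Try again) and is retried. A sketch of that Service shape (selector and target port are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// affinityService pins each client IP to one backend pod via kube-proxy.
func affinityService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"app": "affinity-clusterip"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}
}
------------------------------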
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97[0m should validate Statefulset Status endpoints [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:41.515: INFO: Driver csi-hostpath doesn't support ext4 -- skipping ... skipping 69 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97[0m should implement legacy replacement when the update strategy is OnDelete [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:503[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":2,"skipped":23,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:41.628: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 81 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume one after the other [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":43,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 61 lines ... 
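------------------------------
The StatefulSet "legacy replacement" test above relies on the OnDelete update strategy: after the pod template changes, the controller does not roll pods; each pod picks up the new revision only when something deletes it. A sketch of the field in question:

package sketch

import appsv1 "k8s.io/api/apps/v1"

// onDeleteStrategy opts a StatefulSet out of rolling updates entirely;
// set it on spec.updateStrategy of the StatefulSet.
func onDeleteStrategy() appsv1.StatefulSetUpdateStrategy {
	return appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.OnDeleteStatefulSetStrategyType,
	}
}
------------------------------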
Apr 16 04:22:14.911: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194 Apr 16 04:22:16.087: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Apr 16 04:22:16.559: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5694" in namespace "provisioning-5694" to be "Succeeded or Failed" Apr 16 04:22:16.793: INFO: Pod "hostpath-symlink-prep-provisioning-5694": Phase="Pending", Reason="", readiness=false. Elapsed: 234.170705ms Apr 16 04:22:19.030: INFO: Pod "hostpath-symlink-prep-provisioning-5694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47121216s Apr 16 04:22:21.265: INFO: Pod "hostpath-symlink-prep-provisioning-5694": Phase="Pending", Reason="", readiness=false. Elapsed: 4.70630901s Apr 16 04:22:23.500: INFO: Pod "hostpath-symlink-prep-provisioning-5694": Phase="Pending", Reason="", readiness=false. Elapsed: 6.941318624s Apr 16 04:22:25.736: INFO: Pod "hostpath-symlink-prep-provisioning-5694": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.176440444s [1mSTEP[0m: Saw pod success Apr 16 04:22:25.736: INFO: Pod "hostpath-symlink-prep-provisioning-5694" satisfied condition "Succeeded or Failed" Apr 16 04:22:25.736: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5694" in namespace "provisioning-5694" Apr 16 04:22:25.974: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5694" to be fully deleted Apr 16 04:22:26.209: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-lwmw [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:22:26.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-lwmw" in namespace "provisioning-5694" to be "Succeeded or Failed" Apr 16 04:22:26.678: INFO: Pod "pod-subpath-test-inlinevolume-lwmw": Phase="Pending", Reason="", readiness=false. Elapsed: 234.033712ms Apr 16 04:22:28.914: INFO: Pod "pod-subpath-test-inlinevolume-lwmw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.469375474s Apr 16 04:22:31.148: INFO: Pod "pod-subpath-test-inlinevolume-lwmw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.704263881s Apr 16 04:22:33.386: INFO: Pod "pod-subpath-test-inlinevolume-lwmw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.941570791s Apr 16 04:22:35.624: INFO: Pod "pod-subpath-test-inlinevolume-lwmw": Phase="Pending", Reason="", readiness=false. Elapsed: 9.180294329s Apr 16 04:22:37.865: INFO: Pod "pod-subpath-test-inlinevolume-lwmw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.421083193s [1mSTEP[0m: Saw pod success Apr 16 04:22:37.865: INFO: Pod "pod-subpath-test-inlinevolume-lwmw" satisfied condition "Succeeded or Failed" Apr 16 04:22:38.102: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-lwmw container test-container-volume-inlinevolume-lwmw: <nil> [1mSTEP[0m: delete the pod Apr 16 04:22:38.580: INFO: Waiting for pod pod-subpath-test-inlinevolume-lwmw to disappear Apr 16 04:22:38.814: INFO: Pod pod-subpath-test-inlinevolume-lwmw no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-lwmw Apr 16 04:22:38.814: INFO: Deleting pod "pod-subpath-test-inlinevolume-lwmw" in namespace "provisioning-5694" [1mSTEP[0m: Deleting pod Apr 16 04:22:39.048: INFO: Deleting pod "pod-subpath-test-inlinevolume-lwmw" in namespace "provisioning-5694" Apr 16 04:22:39.518: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5694" in namespace "provisioning-5694" to be "Succeeded or Failed" Apr 16 04:22:39.751: INFO: Pod "hostpath-symlink-prep-provisioning-5694": Phase="Pending", Reason="", readiness=false. Elapsed: 233.707124ms Apr 16 04:22:41.987: INFO: Pod "hostpath-symlink-prep-provisioning-5694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.468844579s Apr 16 04:22:44.222: INFO: Pod "hostpath-symlink-prep-provisioning-5694": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.704145212s [1mSTEP[0m: Saw pod success Apr 16 04:22:44.222: INFO: Pod "hostpath-symlink-prep-provisioning-5694" satisfied condition "Succeeded or Failed" Apr 16 04:22:44.222: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5694" in namespace "provisioning-5694" Apr 16 04:22:44.459: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5694" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:22:44.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "provisioning-5694" for this suite. ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":72,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... [32m• [SLOW TEST:20.594 seconds][0m [sig-api-machinery] ResourceQuota [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a secret. 
[Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":6,"skipped":22,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:45.282: INFO: Only supported for providers [azure] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 62 lines ... Apr 16 04:22:20.034: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support readOnly directory specified in the volumeMount /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365 Apr 16 04:22:21.212: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Apr 16 04:22:21.685: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5271" in namespace "provisioning-5271" to be "Succeeded or Failed" Apr 16 04:22:21.921: INFO: Pod "hostpath-symlink-prep-provisioning-5271": Phase="Pending", Reason="", readiness=false. Elapsed: 235.694168ms Apr 16 04:22:24.157: INFO: Pod "hostpath-symlink-prep-provisioning-5271": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47166098s Apr 16 04:22:26.394: INFO: Pod "hostpath-symlink-prep-provisioning-5271": Phase="Pending", Reason="", readiness=false. Elapsed: 4.708075886s Apr 16 04:22:28.632: INFO: Pod "hostpath-symlink-prep-provisioning-5271": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.946335858s [1mSTEP[0m: Saw pod success Apr 16 04:22:28.632: INFO: Pod "hostpath-symlink-prep-provisioning-5271" satisfied condition "Succeeded or Failed" Apr 16 04:22:28.632: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5271" in namespace "provisioning-5271" Apr 16 04:22:28.871: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5271" to be fully deleted Apr 16 04:22:29.106: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-bntb [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:22:29.347: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-bntb" in namespace "provisioning-5271" to be "Succeeded or Failed" Apr 16 04:22:29.590: INFO: Pod "pod-subpath-test-inlinevolume-bntb": Phase="Pending", Reason="", readiness=false. Elapsed: 243.051561ms Apr 16 04:22:31.826: INFO: Pod "pod-subpath-test-inlinevolume-bntb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479030569s Apr 16 04:22:34.063: INFO: Pod "pod-subpath-test-inlinevolume-bntb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.716064737s Apr 16 04:22:36.301: INFO: Pod "pod-subpath-test-inlinevolume-bntb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953208889s Apr 16 04:22:38.537: INFO: Pod "pod-subpath-test-inlinevolume-bntb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.189294879s [1mSTEP[0m: Saw pod success Apr 16 04:22:38.537: INFO: Pod "pod-subpath-test-inlinevolume-bntb" satisfied condition "Succeeded or Failed" Apr 16 04:22:38.772: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-bntb container test-container-subpath-inlinevolume-bntb: <nil> [1mSTEP[0m: delete the pod Apr 16 04:22:39.259: INFO: Waiting for pod pod-subpath-test-inlinevolume-bntb to disappear Apr 16 04:22:39.498: INFO: Pod pod-subpath-test-inlinevolume-bntb no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-bntb Apr 16 04:22:39.499: INFO: Deleting pod "pod-subpath-test-inlinevolume-bntb" in namespace "provisioning-5271" [1mSTEP[0m: Deleting pod Apr 16 04:22:39.734: INFO: Deleting pod "pod-subpath-test-inlinevolume-bntb" in namespace "provisioning-5271" Apr 16 04:22:40.210: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5271" in namespace "provisioning-5271" to be "Succeeded or Failed" Apr 16 04:22:40.445: INFO: Pod "hostpath-symlink-prep-provisioning-5271": Phase="Pending", Reason="", readiness=false. Elapsed: 235.236184ms Apr 16 04:22:42.682: INFO: Pod "hostpath-symlink-prep-provisioning-5271": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47189558s Apr 16 04:22:44.917: INFO: Pod "hostpath-symlink-prep-provisioning-5271": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.707693511s [1mSTEP[0m: Saw pod success Apr 16 04:22:44.917: INFO: Pod "hostpath-symlink-prep-provisioning-5271" satisfied condition "Succeeded or Failed" Apr 16 04:22:44.917: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5271" in namespace "provisioning-5271" Apr 16 04:22:45.157: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5271" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:22:45.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "provisioning-5271" for this suite. ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":17,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:22:45.885: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 65 lines ... 
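------------------------------
The subPath tests above mount a single subdirectory of a volume into the container, optionally read-only; with ReadOnly set, writes from inside the container fail even though the underlying volume is writable. A sketch of the volumeMount shape (names are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// readOnlySubPathMount exposes only one subdirectory of the volume,
// mounted read-only inside the container.
func readOnlySubPathMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test-volume",
		SubPath:   "subpath-dir",
		ReadOnly:  true,
	}
}
------------------------------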
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
on terminated container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":29,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:46.894: INFO: Only supported for providers [gce gke] (not aws)
... skipping 33 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:22:51.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7002" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:51.798: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 110 lines ...
Apr 16 04:22:45.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 16 04:22:47.428: INFO: Waiting up to 5m0s for pod "security-context-037818f4-91a1-451f-ae43-5c58aa3acd33" in namespace "security-context-6172" to be "Succeeded or Failed"
Apr 16 04:22:47.663: INFO: Pod "security-context-037818f4-91a1-451f-ae43-5c58aa3acd33": Phase="Pending", Reason="", readiness=false. Elapsed: 235.221694ms
Apr 16 04:22:49.900: INFO: Pod "security-context-037818f4-91a1-451f-ae43-5c58aa3acd33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47234068s
Apr 16 04:22:52.137: INFO: Pod "security-context-037818f4-91a1-451f-ae43-5c58aa3acd33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.708685542s
STEP: Saw pod success
Apr 16 04:22:52.137: INFO: Pod "security-context-037818f4-91a1-451f-ae43-5c58aa3acd33" satisfied condition "Succeeded or Failed"
Apr 16 04:22:52.372: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod security-context-037818f4-91a1-451f-ae43-5c58aa3acd33 container test-container: <nil>
STEP: delete the pod
Apr 16 04:22:52.856: INFO: Waiting for pod security-context-037818f4-91a1-451f-ae43-5c58aa3acd33 to disappear
Apr 16 04:22:53.092: INFO: Pod security-context-037818f4-91a1-451f-ae43-5c58aa3acd33 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.579 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":55,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:22:32.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
• [SLOW TEST:20.986 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":10,"skipped":55,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:53.966: INFO: Only supported for providers [gce gke] (not aws)
... skipping 28 lines ...
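Each {"msg":"PASSED ...","total":-1,"completed":N,"skipped":M,"failed":0} line is a machine-readable per-spec summary that ginkgo emits alongside the human-readable output. A small sketch for tallying them from a saved log file; the field names match the JSON keys shown above, while the scanner program itself is an illustrative assumption, not part of the test harness:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// specResult mirrors the per-spec JSON summary lines in the log.
type specResult struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // log lines can be long
	passed := 0
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, `{"msg":`) {
			continue // skip everything that isn't a progress line
		}
		var r specResult
		if err := json.Unmarshal([]byte(line), &r); err != nil {
			continue
		}
		if strings.HasPrefix(r.Msg, "PASSED") {
			passed++
		}
	}
	fmt.Println("passed specs:", passed)
}
```

Usage: pipe the raw build log through it, e.g. `go run tally.go < build-log.txt`.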
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver "csi-hostpath" does not support topology - skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 53 lines ...
• [SLOW TEST:21.001 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:54.127: INFO: Driver local doesn't support ext4 -- skipping
... skipping 53 lines ...
Apr 16 04:22:13.440: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-5398x6psb
STEP: creating a claim
Apr 16 04:22:13.680: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-pr9x
STEP: Creating a pod to test subpath
Apr 16 04:22:14.413: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-pr9x" in namespace "provisioning-5398" to be "Succeeded or Failed"
Apr 16 04:22:14.652: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 239.001106ms
Apr 16 04:22:16.891: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478263921s
Apr 16 04:22:19.131: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.718751503s
Apr 16 04:22:21.371: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.958425992s
Apr 16 04:22:23.611: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 9.198403264s
Apr 16 04:22:25.850: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 11.437866429s
Apr 16 04:22:28.093: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.680193083s
STEP: Saw pod success
Apr 16 04:22:28.093: INFO: Pod "pod-subpath-test-dynamicpv-pr9x" satisfied condition "Succeeded or Failed"
Apr 16 04:22:28.332: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-pr9x container test-container-subpath-dynamicpv-pr9x: <nil>
STEP: delete the pod
Apr 16 04:22:28.820: INFO: Waiting for pod pod-subpath-test-dynamicpv-pr9x to disappear
Apr 16 04:22:29.061: INFO: Pod pod-subpath-test-dynamicpv-pr9x no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-pr9x
Apr 16 04:22:29.061: INFO: Deleting pod "pod-subpath-test-dynamicpv-pr9x" in namespace "provisioning-5398"
STEP: Creating pod pod-subpath-test-dynamicpv-pr9x
STEP: Creating a pod to test subpath
Apr 16 04:22:29.548: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-pr9x" in namespace "provisioning-5398" to be "Succeeded or Failed"
Apr 16 04:22:29.787: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 238.705826ms
Apr 16 04:22:32.027: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478832422s
Apr 16 04:22:34.266: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.718406625s
Apr 16 04:22:36.506: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.957969141s
Apr 16 04:22:38.746: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 9.198254297s
Apr 16 04:22:40.985: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Pending", Reason="", readiness=false. Elapsed: 11.437136292s
Apr 16 04:22:43.225: INFO: Pod "pod-subpath-test-dynamicpv-pr9x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.677065165s
STEP: Saw pod success
Apr 16 04:22:43.225: INFO: Pod "pod-subpath-test-dynamicpv-pr9x" satisfied condition "Succeeded or Failed"
Apr 16 04:22:43.465: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-pr9x container test-container-subpath-dynamicpv-pr9x: <nil>
STEP: delete the pod
Apr 16 04:22:43.951: INFO: Waiting for pod pod-subpath-test-dynamicpv-pr9x to disappear
Apr 16 04:22:44.190: INFO: Pod pod-subpath-test-dynamicpv-pr9x no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-pr9x
Apr 16 04:22:44.191: INFO: Deleting pod "pod-subpath-test-dynamicpv-pr9x" in namespace "provisioning-5398"
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":23,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:56.884: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
Only supported for providers [gce gke] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:21:50.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:57.817: INFO: Only supported for providers [azure] (not aws)
... skipping 86 lines ...
Apr 16 04:22:47.472: INFO: PersistentVolumeClaim pvc-4wnf9 found but phase is Pending instead of Bound.
Apr 16 04:22:49.710: INFO: PersistentVolumeClaim pvc-4wnf9 found and phase=Bound (6.964926751s)
Apr 16 04:22:49.710: INFO: Waiting up to 3m0s for PersistentVolume local-hkqzq to have phase Bound
Apr 16 04:22:49.947: INFO: PersistentVolume local-hkqzq found and phase=Bound (237.186892ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-s9dj
STEP: Creating a pod to test subpath
Apr 16 04:22:50.668: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-s9dj" in namespace "provisioning-2696" to be "Succeeded or Failed"
Apr 16 04:22:50.911: INFO: Pod "pod-subpath-test-preprovisionedpv-s9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 242.555492ms
Apr 16 04:22:53.148: INFO: Pod "pod-subpath-test-preprovisionedpv-s9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480100758s
Apr 16 04:22:55.387: INFO: Pod "pod-subpath-test-preprovisionedpv-s9dj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.718264021s
STEP: Saw pod success
Apr 16 04:22:55.387: INFO: Pod "pod-subpath-test-preprovisionedpv-s9dj" satisfied condition "Succeeded or Failed"
Apr 16 04:22:55.625: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-s9dj container test-container-volume-preprovisionedpv-s9dj: <nil>
STEP: delete the pod
Apr 16 04:22:56.129: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-s9dj to disappear
Apr 16 04:22:56.368: INFO: Pod pod-subpath-test-preprovisionedpv-s9dj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-s9dj
Apr 16 04:22:56.368: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-s9dj" in namespace "provisioning-2696"
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support non-existent path
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":53,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:59.614: INFO: Only supported for providers [openstack] (not aws)
... skipping 51 lines ...
• [SLOW TEST:14.702 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":8,"skipped":73,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:22:59.922: INFO: Only supported for providers [azure] (not aws)
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: cinder]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Only supported for providers [openstack] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 89 lines ...
• [SLOW TEST:18.962 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:01.097: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 37 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:23:00.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4340" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 109 lines ...
STEP: Destroying namespace "services-359" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":7,"skipped":63,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 256 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Only supported for node OS distro [gci ubuntu custom] (not debian)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":8,"skipped":36,"failed":0}
SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":7,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:23:00.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 04:23:01.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57552c5d-30b2-453c-8a18-deaad5cf75f7" in namespace "projected-7307" to be "Succeeded or Failed"
Apr 16 04:23:01.976: INFO: Pod "downwardapi-volume-57552c5d-30b2-453c-8a18-deaad5cf75f7": Phase="Pending", Reason="", readiness=false. Elapsed: 242.784314ms
Apr 16 04:23:04.215: INFO: Pod "downwardapi-volume-57552c5d-30b2-453c-8a18-deaad5cf75f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.481578743s
STEP: Saw pod success
Apr 16 04:23:04.215: INFO: Pod "downwardapi-volume-57552c5d-30b2-453c-8a18-deaad5cf75f7" satisfied condition "Succeeded or Failed"
Apr 16 04:23:04.454: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod downwardapi-volume-57552c5d-30b2-453c-8a18-deaad5cf75f7 container client-container: <nil>
STEP: delete the pod
Apr 16 04:23:04.938: INFO: Waiting for pod downwardapi-volume-57552c5d-30b2-453c-8a18-deaad5cf75f7 to disappear
Apr 16 04:23:05.181: INFO: Pod downwardapi-volume-57552c5d-30b2-453c-8a18-deaad5cf75f7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.365 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should provide container's memory limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":25,"failed":0}
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":26,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:22:39.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:23:06.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-414" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":8,"skipped":87,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
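The downwardapi-volume pod above exercises the downward API volume: the container's memory limit is projected into a file that the test reads back. A hedged sketch of that pod shape using the corev1 Go types; the image, file path, and limit value are illustrative assumptions, not the suite's actual values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Project the container's memory limit into /etc/podinfo/memory_limit.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // assumption; the e2e suite uses its own test image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name) // in real use, create it via a clientset
}
```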
• [SLOW TEST:6.939 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
Deployment should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":9,"skipped":108,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:08.709: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 84 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
should not deadlock when a pod's predecessor fails
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:252
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:09.449: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 22 lines ...
Apr 16 04:22:19.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Apr 16 04:22:20.486: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Apr 16 04:22:20.968: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-2689" in namespace "volume-2689" to be "Succeeded or Failed"
Apr 16 04:22:21.209: INFO: Pod "hostpath-symlink-prep-volume-2689": Phase="Pending", Reason="", readiness=false. Elapsed: 241.325833ms
Apr 16 04:22:23.450: INFO: Pod "hostpath-symlink-prep-volume-2689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482258553s
Apr 16 04:22:25.690: INFO: Pod "hostpath-symlink-prep-volume-2689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.722306374s
STEP: Saw pod success
Apr 16 04:22:25.690: INFO: Pod "hostpath-symlink-prep-volume-2689" satisfied condition "Succeeded or Failed"
Apr 16 04:22:25.690: INFO: Deleting pod "hostpath-symlink-prep-volume-2689" in namespace "volume-2689"
Apr 16 04:22:25.933: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-2689" to be fully deleted
Apr 16 04:22:26.171: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Apr 16 04:22:34.899: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-2689 exec hostpathsymlink-injector --namespace=volume-2689 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-2689' > /opt/0/index.html'
... skipping 30 lines ...
Apr 16 04:22:59.760: INFO: Pod hostpathsymlink-client still exists
Apr 16 04:23:01.760: INFO: Waiting for pod hostpathsymlink-client to disappear
Apr 16 04:23:02.000: INFO: Pod hostpathsymlink-client still exists
Apr 16 04:23:03.761: INFO: Waiting for pod hostpathsymlink-client to disappear
Apr 16 04:23:04.003: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Apr 16 04:23:04.245: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-2689" in namespace "volume-2689" to be "Succeeded or Failed"
Apr 16 04:23:04.485: INFO: Pod "hostpath-symlink-prep-volume-2689": Phase="Pending", Reason="", readiness=false. Elapsed: 239.41487ms
Apr 16 04:23:06.730: INFO: Pod "hostpath-symlink-prep-volume-2689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.484043474s
Apr 16 04:23:08.971: INFO: Pod "hostpath-symlink-prep-volume-2689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.725241678s
STEP: Saw pod success
Apr 16 04:23:08.971: INFO: Pod "hostpath-symlink-prep-volume-2689" satisfied condition "Succeeded or Failed"
Apr 16 04:23:08.971: INFO: Deleting pod "hostpath-symlink-prep-volume-2689" in namespace "volume-2689"
Apr 16 04:23:09.215: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-2689" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:23:09.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2689" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":2,"skipped":5,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:09.954: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
Apr 16 04:23:03.028: INFO: PersistentVolumeClaim pvc-m4l6h found but phase is Pending instead of Bound.
Apr 16 04:23:05.272: INFO: PersistentVolumeClaim pvc-m4l6h found and phase=Bound (15.912038965s)
Apr 16 04:23:05.272: INFO: Waiting up to 3m0s for PersistentVolume local-s986g to have phase Bound
Apr 16 04:23:05.509: INFO: PersistentVolume local-s986g found and phase=Bound (237.280617ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2zfd
STEP: Creating a pod to test subpath
Apr 16 04:23:06.229: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2zfd" in namespace "provisioning-4950" to be "Succeeded or Failed"
Apr 16 04:23:06.466: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfd": Phase="Pending", Reason="", readiness=false. Elapsed: 237.353527ms
Apr 16 04:23:08.706: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.476478742s
STEP: Saw pod success
Apr 16 04:23:08.706: INFO: Pod "pod-subpath-test-preprovisionedpv-2zfd" satisfied condition "Succeeded or Failed"
Apr 16 04:23:08.943: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-2zfd container test-container-subpath-preprovisionedpv-2zfd: <nil>
STEP: delete the pod
Apr 16 04:23:09.431: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2zfd to disappear
Apr 16 04:23:09.668: INFO: Pod pod-subpath-test-preprovisionedpv-2zfd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2zfd
Apr 16 04:23:09.668: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2zfd" in namespace "provisioning-4950"
... skipping 24 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:15.963: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 61 lines ...
• [SLOW TEST:9.814 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not conflict [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":9,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:16.635: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: vsphere]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Only supported for providers [vsphere] (not aws)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 82 lines ...
[It] should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Apr 16 04:23:05.610: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Apr 16 04:23:05.610: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-wq6b
STEP: Creating a pod to test subpath
Apr 16 04:23:05.851: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-wq6b" in namespace "provisioning-8074" to be "Succeeded or Failed"
Apr 16 04:23:06.092: INFO: Pod "pod-subpath-test-inlinevolume-wq6b": Phase="Pending", Reason="", readiness=false. Elapsed: 241.321304ms
Apr 16 04:23:08.332: INFO: Pod "pod-subpath-test-inlinevolume-wq6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480920037s
Apr 16 04:23:10.571: INFO: Pod "pod-subpath-test-inlinevolume-wq6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.720100627s
Apr 16 04:23:12.811: INFO: Pod "pod-subpath-test-inlinevolume-wq6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.960161474s
Apr 16 04:23:15.054: INFO: Pod "pod-subpath-test-inlinevolume-wq6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.20253453s
STEP: Saw pod success
Apr 16 04:23:15.054: INFO: Pod "pod-subpath-test-inlinevolume-wq6b" satisfied condition "Succeeded or Failed"
Apr 16 04:23:15.292: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-wq6b container test-container-subpath-inlinevolume-wq6b: <nil>
STEP: delete the pod
Apr 16 04:23:15.776: INFO: Waiting for pod pod-subpath-test-inlinevolume-wq6b to disappear
Apr 16 04:23:16.014: INFO: Pod pod-subpath-test-inlinevolume-wq6b no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-wq6b
Apr 16 04:23:16.014: INFO: Deleting pod "pod-subpath-test-inlinevolume-wq6b" in namespace "provisioning-8074"
... skipping 12 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":40,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:17.034: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
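The "readOnly directory specified in the volumeMount" spec above hinges on one field combination: a volume mounted with both SubPath and ReadOnly set, so only a subdirectory is visible and writes from the container must fail. A minimal sketch of that mount shape with the corev1 types; the volume and path names are illustrative, not the test's actual values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One emptyDir volume, exposed to the test container read-only at a subPath.
	mounts := []corev1.VolumeMount{{
		Name:      "test-volume",
		MountPath: "/test-volume",
		SubPath:   "provisioning", // only this subdirectory is visible
		ReadOnly:  true,           // writes through this mount must fail
	}}
	volumes := []corev1.Volume{{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		},
	}}
	fmt.Println(mounts[0].MountPath, volumes[0].Name)
}
```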
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 04:23:10.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8f72d2d-4ecf-4994-a803-204340551887" in namespace "projected-5922" to be "Succeeded or Failed"
Apr 16 04:23:11.164: INFO: Pod "downwardapi-volume-d8f72d2d-4ecf-4994-a803-204340551887": Phase="Pending", Reason="", readiness=false. Elapsed: 239.418119ms
Apr 16 04:23:13.402: INFO: Pod "downwardapi-volume-d8f72d2d-4ecf-4994-a803-204340551887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477152511s
Apr 16 04:23:15.644: INFO: Pod "downwardapi-volume-d8f72d2d-4ecf-4994-a803-204340551887": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.718929681s
STEP: Saw pod success
Apr 16 04:23:15.644: INFO: Pod "downwardapi-volume-d8f72d2d-4ecf-4994-a803-204340551887" satisfied condition "Succeeded or Failed"
Apr 16 04:23:15.882: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod downwardapi-volume-d8f72d2d-4ecf-4994-a803-204340551887 container client-container: <nil>
STEP: delete the pod
Apr 16 04:23:16.369: INFO: Waiting for pod downwardapi-volume-d8f72d2d-4ecf-4994-a803-204340551887 to disappear
Apr 16 04:23:16.606: INFO: Pod downwardapi-volume-d8f72d2d-4ecf-4994-a803-204340551887 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 162 lines ...
Apr 16 04:22:53.069: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-5164k5th
STEP: creating a claim
Apr 16 04:22:53.308: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-kjgh
STEP: Creating a pod to test subpath
Apr 16 04:22:54.025: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kjgh" in namespace "provisioning-516" to be "Succeeded or Failed"
Apr 16 04:22:54.264: INFO: Pod "pod-subpath-test-dynamicpv-kjgh": Phase="Pending", Reason="", readiness=false. Elapsed: 238.742522ms
Apr 16 04:22:56.502: INFO: Pod "pod-subpath-test-dynamicpv-kjgh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477455825s
Apr 16 04:22:58.742: INFO: Pod "pod-subpath-test-dynamicpv-kjgh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.717151122s
Apr 16 04:23:00.981: INFO: Pod "pod-subpath-test-dynamicpv-kjgh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.955815588s
Apr 16 04:23:03.220: INFO: Pod "pod-subpath-test-dynamicpv-kjgh": Phase="Pending", Reason="", readiness=false. Elapsed: 9.195162881s
Apr 16 04:23:05.458: INFO: Pod "pod-subpath-test-dynamicpv-kjgh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.43352303s
STEP: Saw pod success
Apr 16 04:23:05.458: INFO: Pod "pod-subpath-test-dynamicpv-kjgh" satisfied condition "Succeeded or Failed"
Apr 16 04:23:05.697: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-kjgh container test-container-subpath-dynamicpv-kjgh: <nil>
STEP: delete the pod
Apr 16 04:23:06.200: INFO: Waiting for pod pod-subpath-test-dynamicpv-kjgh to disappear
Apr 16 04:23:06.441: INFO: Pod pod-subpath-test-dynamicpv-kjgh no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kjgh
Apr 16 04:23:06.441: INFO: Deleting pod "pod-subpath-test-dynamicpv-kjgh" in namespace "provisioning-516"
... skipping 20 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:24.323: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 90 lines ...
• [SLOW TEST:7.198 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":113,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:25.928: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
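The "creating a StorageClass ... creating a claim" steps above pair a generated per-test StorageClass with a PVC, and the "VolumeMode specified as invalid empty string" warning goes away when the mode is set explicitly rather than left empty. A hedged sketch of such a claim, using the v1.22-era corev1 field layout; the size and names are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	scName := "provisioning-5164k5th" // a generated per-test StorageClass name
	mode := corev1.PersistentVolumeFilesystem

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &scName,
			VolumeMode:       &mode, // set explicitly; leaving it empty triggers the warning above
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
	fmt.Println(*pvc.Spec.StorageClassName, *pvc.Spec.VolumeMode)
}
```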
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Driver local doesn't support DynamicPV -- skipping
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 93 lines ...
Apr 16 04:23:16.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Apr 16 04:23:18.284: INFO: Waiting up to 5m0s for pod "var-expansion-1084fa71-1c51-4e83-96e9-3c07fc27727a" in namespace "var-expansion-2132" to be "Succeeded or Failed"
Apr 16 04:23:18.526: INFO: Pod "var-expansion-1084fa71-1c51-4e83-96e9-3c07fc27727a": Phase="Pending", Reason="", readiness=false. Elapsed: 241.617017ms
Apr 16 04:23:20.765: INFO: Pod "var-expansion-1084fa71-1c51-4e83-96e9-3c07fc27727a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480384552s
Apr 16 04:23:23.004: INFO: Pod "var-expansion-1084fa71-1c51-4e83-96e9-3c07fc27727a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.719788998s
Apr 16 04:23:25.243: INFO: Pod "var-expansion-1084fa71-1c51-4e83-96e9-3c07fc27727a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.958823238s
STEP: Saw pod success
Apr 16 04:23:25.243: INFO: Pod "var-expansion-1084fa71-1c51-4e83-96e9-3c07fc27727a" satisfied condition "Succeeded or Failed"
Apr 16 04:23:25.481: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod var-expansion-1084fa71-1c51-4e83-96e9-3c07fc27727a container dapi-container: <nil>
STEP: delete the pod
Apr 16 04:23:25.962: INFO: Waiting for pod var-expansion-1084fa71-1c51-4e83-96e9-3c07fc27727a to disappear
Apr 16 04:23:26.200: INFO: Pod var-expansion-1084fa71-1c51-4e83-96e9-3c07fc27727a no longer exists
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.830 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:23:17.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 16 04:23:18.526: INFO: Waiting up to 5m0s for pod "pod-e17da4c8-ce89-4b78-add7-418746431b47" in namespace "emptydir-8228" to be "Succeeded or Failed"
Apr 16 04:23:18.764: INFO: Pod "pod-e17da4c8-ce89-4b78-add7-418746431b47": Phase="Pending", Reason="", readiness=false. Elapsed: 237.349086ms
Apr 16 04:23:21.003: INFO: Pod "pod-e17da4c8-ce89-4b78-add7-418746431b47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.4766743s
Apr 16 04:23:23.242: INFO: Pod "pod-e17da4c8-ce89-4b78-add7-418746431b47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715118347s
Apr 16 04:23:25.480: INFO: Pod "pod-e17da4c8-ce89-4b78-add7-418746431b47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.953652719s
STEP: Saw pod success
Apr 16 04:23:25.480: INFO: Pod "pod-e17da4c8-ce89-4b78-add7-418746431b47" satisfied condition "Succeeded or Failed"
Apr 16 04:23:25.719: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-e17da4c8-ce89-4b78-add7-418746431b47 container test-container: <nil>
STEP: delete the pod
Apr 16 04:23:26.201: INFO: Waiting for pod pod-e17da4c8-ce89-4b78-add7-418746431b47 to disappear
Apr 16 04:23:26.438: INFO: Pod pod-e17da4c8-ce89-4b78-add7-418746431b47 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
volume on default medium should have the correct mode using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":4,"skipped":14,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 99 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI attach test using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
should not require VolumeAttach for drivers without attachment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":7,"skipped":34,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 76 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":15,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:30.140: INFO: Only supported for providers [azure] (not aws)
... skipping 14 lines ...
[36mOnly supported for providers [azure] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1576 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0} [BeforeEach] [sig-api-machinery] Generated clientset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:23:27.610: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename clientset [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 9 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:23:30.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "clientset-4393" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":5,"skipped":36,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:23:30.713: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 150 lines ... Apr 16 04:23:16.586: INFO: PersistentVolumeClaim pvc-jnc2c found but phase is Pending instead of Bound. Apr 16 04:23:18.822: INFO: PersistentVolumeClaim pvc-jnc2c found and phase=Bound (11.413662844s) Apr 16 04:23:18.822: INFO: Waiting up to 3m0s for PersistentVolume local-7wwlk to have phase Bound Apr 16 04:23:19.057: INFO: PersistentVolume local-7wwlk found and phase=Bound (234.861159ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-7nq2 [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:23:19.762: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7nq2" in namespace "provisioning-3914" to be "Succeeded or Failed" Apr 16 04:23:19.997: INFO: Pod "pod-subpath-test-preprovisionedpv-7nq2": Phase="Pending", Reason="", readiness=false. Elapsed: 234.764064ms Apr 16 04:23:22.233: INFO: Pod "pod-subpath-test-preprovisionedpv-7nq2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470607356s Apr 16 04:23:24.469: INFO: Pod "pod-subpath-test-preprovisionedpv-7nq2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.706590419s Apr 16 04:23:26.705: INFO: Pod "pod-subpath-test-preprovisionedpv-7nq2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.942026457s [1mSTEP[0m: Saw pod success Apr 16 04:23:26.705: INFO: Pod "pod-subpath-test-preprovisionedpv-7nq2" satisfied condition "Succeeded or Failed" Apr 16 04:23:26.940: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-7nq2 container test-container-volume-preprovisionedpv-7nq2: <nil> [1mSTEP[0m: delete the pod Apr 16 04:23:27.422: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7nq2 to disappear Apr 16 04:23:27.657: INFO: Pod pod-subpath-test-preprovisionedpv-7nq2 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-7nq2 Apr 16 04:23:27.657: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7nq2" in namespace "provisioning-3914" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":40,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ServerSideApply /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 16 lines ... [32m• [SLOW TEST:13.905 seconds][0m [sig-api-machinery] ServerSideApply [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should work for CRDs [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":10,"skipped":55,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... 
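The repeated Phase="Pending" ... Elapsed: ... lines above come from the framework polling the pod object until it reaches a terminal phase ("Succeeded or Failed"). A rough client-go equivalent of that loop — a sketch only; the real helper lives in the e2e framework, and the 2s poll interval here is an assumption:

```go
package e2eutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodTerminal polls a pod until it is Succeeded or Failed,
// mirroring the "Waiting up to 5m0s for pod ..." lines in the log.
func WaitForPodTerminal(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition satisfied
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil // still Pending/Running; keep polling
	})
}
```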
[32m• [SLOW TEST:5.719 seconds][0m [sig-network] IngressClass API [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should support creating IngressClass API operations [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:23:32.667: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 147 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:23:33.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "watch-6196" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":6,"skipped":60,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:23:34.224: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 5 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: emptydir] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver emptydir doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 57 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:23:37.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-6713" for this suite. 
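The [sig-storage] Secrets spec closing here (its PASSED record follows just below) exercises the Secret `immutable` field, GA since v1.21: once set, the API server rejects updates to the data and the flag can only be cleared by deleting the object. A minimal sketch of such a Secret (name and contents illustrative):

```go
package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ImmutableSecret builds a Secret whose contents the API server will
// refuse to update once created, as the spec above verifies.
func ImmutableSecret(ns, name string) *corev1.Secret {
	immutable := true
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: name},
		Immutable:  &immutable,
		StringData: map[string]string{"key": "value"},
	}
}
```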
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":7,"skipped":70,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:23:26.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-835e0e76-4451-4e9c-ba7b-dd2260ee442c
STEP: Creating a pod to test consume secrets
Apr 16 04:23:28.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d" in namespace "projected-1981" to be "Succeeded or Failed"
Apr 16 04:23:28.622: INFO: Pod "pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 237.827864ms
Apr 16 04:23:30.860: INFO: Pod "pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475632764s
Apr 16 04:23:33.099: INFO: Pod "pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.714236652s
Apr 16 04:23:35.337: INFO: Pod "pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.95269421s
Apr 16 04:23:37.576: INFO: Pod "pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.191245878s
Apr 16 04:23:39.813: INFO: Pod "pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.428648232s
STEP: Saw pod success
Apr 16 04:23:39.813: INFO: Pod "pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d" satisfied condition "Succeeded or Failed"
Apr 16 04:23:40.052: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 16 04:23:40.534: INFO: Waiting for pod pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d to disappear
Apr 16 04:23:40.772: INFO: Pod pod-projected-secrets-3e5ed804-a5f4-4f0a-8c68-5cfc01f67f7d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
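The projected-secret spec above consumes a Secret through a `projected` volume, mapping a single key to a new path with an explicit per-item mode. A sketch of the pod shape it builds — key names, paths, and the agnhost mounttest command are illustrative approximations, not the test's exact values:

```go
package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ProjectedSecretPod mounts secretName via a projected volume and maps
// one key to a path with an explicit file mode, as in the spec above.
func ProjectedSecretPod(ns, name, secretName string) *corev1.Pod {
	mode := int32(0400) // the "Item Mode" the spec sets and then reads back
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
								Items: []corev1.KeyToPath{{
									Key: "data-1", Path: "new-path-data-1", Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// agnhost's mounttest subcommand prints the file back for the assertion.
				Command: []string{"mounttest", "--file_content=/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
}
```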
[32m• [SLOW TEST:14.534 seconds][0m [sig-storage] Projected secret [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":20,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:23:06.231: INFO: >>> kubeConfig: /root/.kube/config ... skipping 6 lines ... Apr 16 04:23:07.430: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-1698bvn2r [1mSTEP[0m: creating a claim Apr 16 04:23:07.669: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-vfgk [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:23:08.389: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vfgk" in namespace "provisioning-1698" to be "Succeeded or Failed" Apr 16 04:23:08.627: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Pending", Reason="", readiness=false. Elapsed: 238.198632ms Apr 16 04:23:10.866: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476774207s Apr 16 04:23:13.104: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715356882s Apr 16 04:23:15.348: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959230538s Apr 16 04:23:17.588: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Pending", Reason="", readiness=false. Elapsed: 9.198837909s Apr 16 04:23:19.828: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Pending", Reason="", readiness=false. Elapsed: 11.439014575s Apr 16 04:23:22.069: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Pending", Reason="", readiness=false. Elapsed: 13.680216022s Apr 16 04:23:24.309: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.920084387s Apr 16 04:23:26.550: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Pending", Reason="", readiness=false. Elapsed: 18.160397915s Apr 16 04:23:28.792: INFO: Pod "pod-subpath-test-dynamicpv-vfgk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.402632532s [1mSTEP[0m: Saw pod success Apr 16 04:23:28.792: INFO: Pod "pod-subpath-test-dynamicpv-vfgk" satisfied condition "Succeeded or Failed" Apr 16 04:23:29.030: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-vfgk container test-container-subpath-dynamicpv-vfgk: <nil> [1mSTEP[0m: delete the pod Apr 16 04:23:29.514: INFO: Waiting for pod pod-subpath-test-dynamicpv-vfgk to disappear Apr 16 04:23:29.752: INFO: Pod pod-subpath-test-dynamicpv-vfgk no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-vfgk Apr 16 04:23:29.752: INFO: Deleting pod "pod-subpath-test-dynamicpv-vfgk" in namespace "provisioning-1698" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":26,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:23:42.403: INFO: Only supported for providers [vsphere] (not aws) ... skipping 98 lines ... Apr 16 04:23:30.987: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward api env vars Apr 16 04:23:32.425: INFO: Waiting up to 5m0s for pod "downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def" in namespace "downward-api-7905" to be "Succeeded or Failed" Apr 16 04:23:32.663: INFO: Pod "downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def": Phase="Pending", Reason="", readiness=false. Elapsed: 238.695035ms Apr 16 04:23:34.904: INFO: Pod "downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479602821s Apr 16 04:23:37.144: INFO: Pod "downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def": Phase="Pending", Reason="", readiness=false. Elapsed: 4.719647444s Apr 16 04:23:39.384: INFO: Pod "downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959650276s Apr 16 04:23:41.624: INFO: Pod "downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def": Phase="Pending", Reason="", readiness=false. Elapsed: 9.199060553s Apr 16 04:23:43.863: INFO: Pod "downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.438598813s [1mSTEP[0m: Saw pod success Apr 16 04:23:43.863: INFO: Pod "downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def" satisfied condition "Succeeded or Failed" Apr 16 04:23:44.102: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def container dapi-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:23:44.597: INFO: Waiting for pod downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def to disappear Apr 16 04:23:44.836: INFO: Pod downward-api-db9a2527-27a8-46b1-b9f7-ebd7bdc00def no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:14.329 seconds][0m [sig-node] Downward API [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should provide pod UID as env vars [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":56,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:23:45.335: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 23 lines ... Apr 16 04:23:38.110: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test emptydir 0644 on node default medium Apr 16 04:23:39.538: INFO: Waiting up to 5m0s for pod "pod-37f263fe-6730-4cc4-b7e6-96ee4e2497a9" in namespace "emptydir-91" to be "Succeeded or Failed" Apr 16 04:23:39.775: INFO: Pod "pod-37f263fe-6730-4cc4-b7e6-96ee4e2497a9": Phase="Pending", Reason="", readiness=false. Elapsed: 237.209411ms Apr 16 04:23:42.013: INFO: Pod "pod-37f263fe-6730-4cc4-b7e6-96ee4e2497a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475096445s Apr 16 04:23:44.251: INFO: Pod "pod-37f263fe-6730-4cc4-b7e6-96ee4e2497a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.713099134s [1mSTEP[0m: Saw pod success Apr 16 04:23:44.251: INFO: Pod "pod-37f263fe-6730-4cc4-b7e6-96ee4e2497a9" satisfied condition "Succeeded or Failed" Apr 16 04:23:44.489: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-37f263fe-6730-4cc4-b7e6-96ee4e2497a9 container test-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:23:44.970: INFO: Waiting for pod pod-37f263fe-6730-4cc4-b7e6-96ee4e2497a9 to disappear Apr 16 04:23:45.209: INFO: Pod pod-37f263fe-6730-4cc4-b7e6-96ee4e2497a9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
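The Downward API spec above injects the pod's own metadata into the container environment via fieldRef sources; `metadata.uid` is what the "pod UID as env vars" assertion reads back. A minimal sketch of the env wiring:

```go
package e2eutil

import corev1 "k8s.io/api/core/v1"

// DownwardAPIEnv exposes the pod's own name, namespace, and UID to the
// container via fieldRef env vars, as the spec above verifies.
func DownwardAPIEnv() []corev1.EnvVar {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	return []corev1.EnvVar{
		fieldEnv("POD_NAME", "metadata.name"),
		fieldEnv("POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("POD_UID", "metadata.uid"),
	}
}
```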
[32m• [SLOW TEST:7.576 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":71,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 122 lines ... [32m• [SLOW TEST:86.015 seconds][0m [sig-apps] CronJob [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should remove from active list jobs that have been deleted [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:239[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":3,"skipped":13,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:23:50.204: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 5 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: dir-bindmounted] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 51 lines ... 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
  [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    Verify if offline PVC expansion works
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":11,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:23:51.482: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 38 lines ...
• [SLOW TEST:18.770 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
Apr 16 04:23:52.252: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Apr 16 04:23:52.252: INFO: stdout: "scheduler etcd-1 etcd-0 controller-manager" [1mSTEP[0m: getting details of componentstatuses [1mSTEP[0m: getting status of scheduler Apr 16 04:23:52.252: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3148 get componentstatuses scheduler' Apr 16 04:23:53.044: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Apr 16 04:23:53.044: INFO: stdout: "NAME STATUS MESSAGE ERROR\nscheduler Healthy ok \n" [1mSTEP[0m: getting status of etcd-1 Apr 16 04:23:53.044: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3148 get componentstatuses etcd-1' Apr 16 04:23:53.837: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Apr 16 04:23:53.837: INFO: stdout: "NAME STATUS MESSAGE ERROR\netcd-1 Healthy {\"health\":\"true\",\"reason\":\"\"} \n" [1mSTEP[0m: getting status of etcd-0 Apr 16 04:23:53.837: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3148 get componentstatuses etcd-0' Apr 16 04:23:54.631: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Apr 16 04:23:54.631: INFO: stdout: "NAME STATUS MESSAGE ERROR\netcd-0 Healthy {\"health\":\"true\",\"reason\":\"\"} \n" [1mSTEP[0m: getting status of controller-manager Apr 16 04:23:54.631: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3148 get componentstatuses controller-manager' Apr 16 04:23:55.444: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" Apr 16 04:23:55.444: INFO: stdout: "NAME STATUS MESSAGE ERROR\ncontroller-manager Healthy ok \n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:23:55.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-3148" for this suite. ... skipping 2 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl get componentstatuses [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:790[0m should get componentstatuses [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:791[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":4,"skipped":26,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:23:55.939: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 14 lines ... 
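Each Running '/logs/artifacts/.../kubectl ...' line above is the test framework shelling out to a real kubectl binary and capturing stdout and stderr separately — which is how the ComponentStatus deprecation warning lands in stderr while the status table goes to stdout. A standalone sketch of that pattern; the helper and the exact flag set are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runKubectl invokes a kubectl binary the way the e2e framework does,
// returning stdout and stderr separately so warnings can be inspected.
func runKubectl(kubectlPath string, args ...string) (string, string, error) {
	cmd := exec.Command(kubectlPath, args...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	return stdout.String(), stderr.String(), err
}

func main() {
	out, errOut, err := runKubectl("kubectl",
		"--kubeconfig=/root/.kube/config",
		"get", "componentstatuses", "scheduler")
	fmt.Printf("stderr: %q\n", errOut) // e.g. the ComponentStatus deprecation warning
	fmt.Printf("stdout: %q err: %v\n", out, err)
}
```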
[36mDriver local doesn't support InlineVolume -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":7,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:21:58.862: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename services [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 60 lines ... Apr 16 04:23:21.316: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9070 exec execpod-affinity4h6s7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.70.111.217:80/' Apr 16 04:23:25.739: INFO: rc: 28 Apr 16 04:23:25.739: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9070 exec execpod-affinity4h6s7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.70.111.217:80/' Apr 16 04:23:30.059: INFO: rc: 28 Apr 16 04:23:30.059: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9070 exec execpod-affinity4h6s7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.70.111.217:80/' Apr 16 04:23:34.435: INFO: rc: 28 Apr 16 04:23:34.436: FAIL: Session is sticky after reaching the timeout Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc000c449a0, 0x79b7308, 0xc002a06160, 0xc002464280) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2873 +0xc96 k8s.io/kubernetes/test/e2e/network.glob..func24.23() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1819 +0x9c ... skipping 35 lines ... 
Apr 16 04:23:45.283: INFO: At 2022-04-16 04:22:08 +0000 UTC - event for affinity-clusterip-timeout-5jhrq: {kubelet ip-172-20-56-43.ap-south-1.compute.internal} Started: Started container affinity-clusterip-timeout Apr 16 04:23:45.283: INFO: At 2022-04-16 04:22:16 +0000 UTC - event for execpod-affinity4h6s7: {default-scheduler } Scheduled: Successfully assigned services-9070/execpod-affinity4h6s7 to ip-172-20-56-43.ap-south-1.compute.internal Apr 16 04:23:45.283: INFO: At 2022-04-16 04:22:17 +0000 UTC - event for execpod-affinity4h6s7: {kubelet ip-172-20-56-43.ap-south-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine Apr 16 04:23:45.283: INFO: At 2022-04-16 04:22:17 +0000 UTC - event for execpod-affinity4h6s7: {kubelet ip-172-20-56-43.ap-south-1.compute.internal} Created: Created container agnhost-container Apr 16 04:23:45.283: INFO: At 2022-04-16 04:22:17 +0000 UTC - event for execpod-affinity4h6s7: {kubelet ip-172-20-56-43.ap-south-1.compute.internal} Started: Started container agnhost-container Apr 16 04:23:45.283: INFO: At 2022-04-16 04:23:34 +0000 UTC - event for execpod-affinity4h6s7: {kubelet ip-172-20-56-43.ap-south-1.compute.internal} Killing: Stopping container agnhost-container Apr 16 04:23:45.283: INFO: At 2022-04-16 04:23:35 +0000 UTC - event for affinity-clusterip-timeout: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-9070/affinity-clusterip-timeout: Operation cannot be fulfilled on endpoints "affinity-clusterip-timeout": the object has been modified; please apply your changes to the latest version and try again Apr 16 04:23:45.283: INFO: At 2022-04-16 04:23:35 +0000 UTC - event for affinity-clusterip-timeout-5jhrq: {kubelet ip-172-20-56-43.ap-south-1.compute.internal} Killing: Stopping container affinity-clusterip-timeout Apr 16 04:23:45.283: INFO: At 2022-04-16 04:23:35 +0000 UTC - event for affinity-clusterip-timeout-8bw56: {kubelet ip-172-20-50-117.ap-south-1.compute.internal} Killing: Stopping container affinity-clusterip-timeout Apr 16 04:23:45.283: INFO: At 2022-04-16 04:23:35 +0000 UTC - event for affinity-clusterip-timeout-tmgl4: {kubelet ip-172-20-63-100.ap-south-1.compute.internal} Killing: Stopping container affinity-clusterip-timeout Apr 16 04:23:45.519: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 04:23:45.520: INFO: Apr 16 04:23:45.756: INFO: ... skipping 295 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [91mApr 16 04:23:34.436: Session is sticky after reaching the timeout[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2873 [90m------------------------------[0m {"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":7,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 20 lines ... 
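The failed affinity spec drives repeated curl requests at the ClusterIP through an exec pod, then asserts that responses stop pinning to one backend once the session-affinity timeout has elapsed. Stripped of the exec/curl plumbing, the failing assertion reduces to a same-host check over the observed responses; a self-contained sketch (the pod name is taken from the events above):

```go
package main

import "fmt"

// allSameHost reports whether every observed response came from one
// backend pod — i.e. whether the session was still "sticky".
func allSameHost(hosts []string) bool {
	if len(hosts) == 0 {
		return false
	}
	for _, h := range hosts {
		if h != hosts[0] {
			return false
		}
	}
	return true
}

func main() {
	// Responses gathered after the affinity timeout elapsed. The spec
	// above FAILED because, as here, they still matched a single pod.
	after := []string{
		"affinity-clusterip-timeout-5jhrq",
		"affinity-clusterip-timeout-5jhrq",
		"affinity-clusterip-timeout-5jhrq",
	}
	if allSameHost(after) {
		fmt.Println("Session is sticky after reaching the timeout") // the failure seen above
	}
}
```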
[32m• [SLOW TEST:16.079 seconds][0m [sig-apps] DisruptionController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m evictions: enough pods, absolute => should allow an eviction [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":6,"skipped":39,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:23:58.582: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping ... skipping 107 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:23:59.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-6539" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":5,"skipped":10,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:23:59.665: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 36 lines ... [32m• [SLOW TEST:30.595 seconds][0m [sig-apps] CronJob [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should be able to schedule after more than 100 missed schedule [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":7,"skipped":44,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 47 lines ... 
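The DisruptionController spec at the top of this block ("enough pods, absolute => should allow an eviction") creates a PodDisruptionBudget with an absolute minAvailable and verifies that an eviction is admitted while enough pods remain. A sketch of such a PDB using policy/v1, which is available on this 1.22 cluster; the selector labels are illustrative:

```go
package e2eutil

import (
	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// AbsolutePDB allows evictions only while at least minAvailable
// matching pods remain, as in the "enough pods, absolute" spec above.
func AbsolutePDB(ns, name string, minAvailable int) *policyv1.PodDisruptionBudget {
	min := intstr.FromInt(minAvailable)
	return &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: name},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &min,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "pdb-workload"}},
		},
	}
}
```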
Apr 16 04:23:11.180: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-v8skl] to have phase Bound Apr 16 04:23:11.414: INFO: PersistentVolumeClaim pvc-v8skl found and phase=Bound (234.47836ms) [1mSTEP[0m: Deleting the previously created pod Apr 16 04:23:24.589: INFO: Deleting pod "pvc-volume-tester-6dd48" in namespace "csi-mock-volumes-5521" Apr 16 04:23:24.825: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6dd48" to be fully deleted [1mSTEP[0m: Checking CSI driver logs Apr 16 04:23:29.533: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/049fbef5-acfd-42f6-8112-996496cb6197/volumes/kubernetes.io~csi/pvc-af51836a-22d0-4294-98c3-f75eef56ca76/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} [1mSTEP[0m: Deleting pod pvc-volume-tester-6dd48 Apr 16 04:23:29.533: INFO: Deleting pod "pvc-volume-tester-6dd48" in namespace "csi-mock-volumes-5521" [1mSTEP[0m: Deleting claim pvc-v8skl Apr 16 04:23:30.239: INFO: Waiting up to 2m0s for PersistentVolume pvc-af51836a-22d0-4294-98c3-f75eef56ca76 to get deleted Apr 16 04:23:30.473: INFO: PersistentVolume pvc-af51836a-22d0-4294-98c3-f75eef56ca76 found and phase=Released (234.339321ms) Apr 16 04:23:32.710: INFO: PersistentVolume pvc-af51836a-22d0-4294-98c3-f75eef56ca76 found and phase=Released (2.471352079s) ... skipping 46 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSIServiceAccountToken [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497[0m token should not be plumbed down when csiServiceAccountTokenEnabled=false [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":3,"skipped":45,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 6 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:24:01.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "podtemplate-1419" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":7,"skipped":57,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:02.011: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 108 lines ... 
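The "Checking CSI driver logs" step above works by scanning the mock driver's output for one-line JSON records of gRPC calls — like the NodeUnpublishVolume entry shown — and asserting on which requests were (or were not) made. A sketch of that scan, assuming exactly the record shape shown in the log:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// grpcCall mirrors the JSON lines the mock CSI driver emits per gRPC call.
type grpcCall struct {
	Method  string                 `json:"Method"`
	Request map[string]interface{} `json:"Request"`
	Error   string                 `json:"Error"`
}

// findCalls returns every logged call whose method has the given suffix,
// e.g. "NodeUnpublishVolume" in the "Checking CSI driver logs" step above.
func findCalls(driverLog, methodSuffix string) []grpcCall {
	var out []grpcCall
	sc := bufio.NewScanner(strings.NewReader(driverLog))
	for sc.Scan() {
		var c grpcCall
		if json.Unmarshal([]byte(sc.Text()), &c) != nil {
			continue // not a call record
		}
		if strings.HasSuffix(c.Method, methodSuffix) {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	log := `{"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4"},"Error":""}`
	for _, c := range findCalls(log, "NodeUnpublishVolume") {
		fmt.Printf("Found %s: volume_id=%v\n", c.Method, c.Request["volume_id"])
	}
}
```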
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume at the same time [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":126,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:04.424: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 36 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:24:05.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "watch-1088" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":8,"skipped":62,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:05.660: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping ... skipping 57 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:24:07.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "custom-resource-definition-638" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... 
[32m• [SLOW TEST:18.792 seconds][0m [sig-apps] DisruptionController [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m should update/patch PodDisruptionBudget status [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:14.784: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 127 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Basic StatefulSet functionality [StatefulSetBasic] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97[0m should adopt matching orphans and release non-matching pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:167[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":6,"skipped":50,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 26 lines ... [32m• [SLOW TEST:18.568 seconds][0m [sig-storage] EmptyDir volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m pod should support memory backed volumes of specified size [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":4,"skipped":47,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:20.607: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 48 lines ... 
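The EmptyDir spec above ("memory backed volumes of specified size") mounts an emptyDir with medium Memory, which the kubelet backs with a tmpfs sized to the given limit inside the pod. A sketch of the volume source:

```go
package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// MemoryBackedEmptyDir is mounted as tmpfs sized to sizeLimit, which is
// what the "memory backed volumes of specified size" spec above checks
// (e.g. via the mount's size= option inside the pod).
func MemoryBackedEmptyDir(sizeLimit string) corev1.VolumeSource {
	limit := resource.MustParse(sizeLimit) // e.g. "256Mi"
	return corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{
			Medium:    corev1.StorageMediumMemory,
			SizeLimit: &limit,
		},
	}
}
```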
[32m• [SLOW TEST:35.329 seconds][0m [sig-network] DNS [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should support configurable pod resolv.conf [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":6,"skipped":67,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:21.184: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping ... skipping 275 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m Clean up pods on node [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279[0m kubelet should be able to delete 10 pods per node in 1m0s. [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":4,"skipped":18,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:21.931: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 37 lines ... [32m• [SLOW TEST:65.288 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be restarted with a failing exec liveness probe that took longer than the timeout [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:258[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":10,"skipped":97,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:21.990: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 169 lines ... 
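The DNS spec above ("should support configurable pod resolv.conf") sets dnsPolicy: None together with an explicit dnsConfig, then reads /etc/resolv.conf inside the pod to verify the rendered file. A sketch of the pod spec pieces involved (nameserver, search, and ndots values are illustrative):

```go
package e2eutil

import corev1 "k8s.io/api/core/v1"

// CustomDNSPodSpec opts the pod out of cluster DNS and writes an
// explicit resolv.conf, as the configurable-resolv.conf spec above does.
func CustomDNSPodSpec() corev1.PodSpec {
	ndots := "2"
	return corev1.PodSpec{
		DNSPolicy: corev1.DNSNone, // ignore cluster DNS entirely
		DNSConfig: &corev1.PodDNSConfig{
			Nameservers: []string{"1.1.1.1"},
			Searches:    []string{"resolv.conf.local"},
			Options:     []corev1.PodDNSConfigOption{{Name: "ndots", Value: &ndots}},
		},
		Containers: []corev1.Container{{
			Name:    "agnhost",
			Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32",
			Command: []string{"sleep", "3600"},
		}},
	}
}
```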
[1mSTEP[0m: creating an object not containing a namespace with in-cluster config Apr 16 04:21:51.072: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3722 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1' Apr 16 04:21:54.381: INFO: rc: 255 [1mSTEP[0m: trying to use kubectl with invalid token Apr 16 04:21:54.381: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3722 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1' Apr 16 04:21:56.924: INFO: rc: 255 Apr 16 04:21:56.924: INFO: got err error running /logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3722 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1: Command stdout: I0416 04:21:56.621418 195 merged_client_builder.go:163] Using in-cluster namespace I0416 04:21:56.621844 195 merged_client_builder.go:121] Using in-cluster configuration I0416 04:21:56.625589 195 merged_client_builder.go:121] Using in-cluster configuration I0416 04:21:56.633636 195 merged_client_builder.go:121] Using in-cluster configuration I0416 04:21:56.634189 195 round_trippers.go:432] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-3722/pods?limit=500 ... skipping 8 lines ... "metadata": {}, "status": "Failure", "message": "Unauthorized", "reason": "Unauthorized", "code": 401 }] F0416 04:21:56.640249 195 helpers.go:116] error: You must be logged in to the server (Unauthorized) goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc00032ae00, 0x68, 0x1af) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30d3380, 0xc000000003, 0x0, 0x0, 0xc0005fccb0, 0x2, 0x27f4698, 0xa, 0x74, 0x40e300) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30d3380, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc000396dd0, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185 k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000478300, 0x3a, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x226b500, 0xc000302720, 0x20ec0f0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:178 +0x8a3 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc000554280, 0xc0001d2870, 0x1, 0x3) ... skipping 66 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0x710 stderr: + /tmp/kubectl get pods '--token=invalid' '--v=7' command terminated with exit code 255 error: exit status 255 [1mSTEP[0m: trying to use kubectl with invalid server Apr 16 04:21:56.924: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3722 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1' Apr 16 04:24:09.357: INFO: rc: 255 Apr 16 04:24:09.357: INFO: got err error running /logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3722 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1: Command stdout: I0416 04:21:59.184609 207 merged_client_builder.go:163] Using in-cluster namespace I0416 04:22:24.207834 207 round_trippers.go:454] GET http://invalid/api?timeout=32s in 25022 milliseconds I0416 04:22:24.207958 207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host I0416 04:22:54.209391 207 round_trippers.go:454] GET http://invalid/api?timeout=32s in 30001 milliseconds I0416 04:22:54.209459 207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout I0416 04:22:54.209477 207 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout I0416 04:23:24.210423 207 round_trippers.go:454] GET http://invalid/api?timeout=32s in 30000 milliseconds I0416 04:23:24.210489 207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout I0416 04:23:49.213833 207 round_trippers.go:454] GET http://invalid/api?timeout=32s in 25003 milliseconds I0416 04:23:49.213897 207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.3.16:43583->100.64.0.10:53: i/o timeout I0416 04:24:09.220497 207 round_trippers.go:454] GET http://invalid/api?timeout=32s in 20006 milliseconds I0416 04:24:09.220569 207 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.3.16:39083->100.64.0.10:53: i/o timeout I0416 04:24:09.220615 207 helpers.go:235] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.3.16:39083->100.64.0.10:53: i/o timeout F0416 04:24:09.220631 207 helpers.go:116] Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: read udp 100.96.3.16:39083->100.64.0.10:53: i/o timeout goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc0000a4000, 0xb3, 0x1b8) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30d3380, 0xc000000003, 0x0, 0x0, 0xc00043d030, 0x2, 0x27f4698, 0xa, 0x74, 0x40e300) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30d3380, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0004de220, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185 k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0003aeab0, 0x84, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x226a840, 0xc0004d63c0, 0x20ec0f0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc0003da000, 0xc000354210, 0x1, 0x3) ... skipping 30 lines ... /usr/local/go/src/net/http/client.go:396 +0x337 stderr: + /tmp/kubectl get pods '--server=invalid' '--v=6' command terminated with exit code 255 error: exit status 255 [1mSTEP[0m: trying to use kubectl with invalid namespace Apr 16 04:24:09.358: INFO: Running '/logs/artifacts/173f864a-bd3b-11ec-a313-ea2de6b4f6d8/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3722 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1' Apr 16 04:24:11.891: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n" Apr 16 04:24:11.891: INFO: stdout: "I0416 04:24:11.741390 221 merged_client_builder.go:121] Using in-cluster configuration\nI0416 04:24:11.743845 221 merged_client_builder.go:121] Using in-cluster configuration\nI0416 04:24:11.747082 221 merged_client_builder.go:121] Using in-cluster configuration\nI0416 04:24:11.754840 221 round_trippers.go:454] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 7 milliseconds\nNo resources found in invalid namespace.\n" Apr 16 04:24:11.891: INFO: stdout: I0416 04:24:11.741390 221 merged_client_builder.go:121] Using in-cluster configuration ... skipping 63 lines ... 
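The in-cluster config spec above runs kubectl inside a pod with no kubeconfig, so the client falls back to the mounted service-account token and the KUBERNETES_SERVICE_HOST environment (the "Using in-cluster configuration" lines), and it gets the 401 above once --token=invalid overrides that token. The same fallback from Go with client-go:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Inside a pod this reads the mounted service-account token and CA,
	// which is what kubectl's "Using in-cluster configuration" refers to.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err) // not running inside a cluster
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		panic(err) // e.g. 401 Unauthorized if the token is invalid
	}
	fmt.Printf("found %d pods\n", len(pods.Items))
}
```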
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Simple pod [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379[0m should handle in-cluster config [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:646[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":2,"skipped":1,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":7,"skipped":65,"failed":0} [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:23:18.490: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename csi-mock-volumes [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 44 lines ... Apr 16 04:23:32.379: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rn7pm] to have phase Bound Apr 16 04:23:32.616: INFO: PersistentVolumeClaim pvc-rn7pm found and phase=Bound (236.724348ms) [1mSTEP[0m: Deleting the previously created pod Apr 16 04:23:49.809: INFO: Deleting pod "pvc-volume-tester-q86kb" in namespace "csi-mock-volumes-9604" Apr 16 04:23:50.047: INFO: Wait up to 5m0s for pod "pvc-volume-tester-q86kb" to be fully deleted [1mSTEP[0m: Checking CSI driver logs Apr 16 04:23:52.769: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2d84fd1d-687e-4c89-a009-8cdf63a66747/volumes/kubernetes.io~csi/pvc-c18e8552-11fb-40c0-af34-b7dc35bc4e1a/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} [1mSTEP[0m: Deleting pod pvc-volume-tester-q86kb Apr 16 04:23:52.770: INFO: Deleting pod "pvc-volume-tester-q86kb" in namespace "csi-mock-volumes-9604" [1mSTEP[0m: Deleting claim pvc-rn7pm Apr 16 04:23:53.485: INFO: Waiting up to 2m0s for PersistentVolume pvc-c18e8552-11fb-40c0-af34-b7dc35bc4e1a to get deleted Apr 16 04:23:53.722: INFO: PersistentVolume pvc-c18e8552-11fb-40c0-af34-b7dc35bc4e1a found and phase=Released (236.385186ms) Apr 16 04:23:55.962: INFO: PersistentVolume pvc-c18e8552-11fb-40c0-af34-b7dc35bc4e1a found and phase=Released (2.476368888s) ... skipping 48 lines ... 
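------------------------------
Note on "Checking CSI driver logs" above: the mock CSI driver emits one JSON record per gRPC call, and the test scans those records for entries such as NodeUnpublishVolume. A sketch of that record's shape, assuming only the fields visible in the log lines (the struct is illustrative, not the test's own type):

package main

import (
	"encoding/json"
	"fmt"
)

// grpcCall mirrors the {"Method":...,"Request":...,"Response":...,"Error":...}
// records the mock driver logs for every call.
type grpcCall struct {
	Method   string          `json:"Method"`
	Request  json.RawMessage `json:"Request"`
	Response json.RawMessage `json:"Response"`
	Error    string          `json:"Error"`
}

func main() {
	line := `{"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4"},"Response":{},"Error":""}`
	var c grpcCall
	if err := json.Unmarshal([]byte(line), &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Method) // /csi.v1.Node/NodeUnpublishVolume
}
------------------------------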
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI workload information using mock driver [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444[0m should not be passed when podInfoOnMount=nil [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":8,"skipped":65,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:23.865: INFO: Only supported for providers [gce gke] (not aws) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 199 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 [1mSTEP[0m: Creating a pod to test downward API volume plugin Apr 16 04:24:22.734: INFO: Waiting up to 5m0s for pod "metadata-volume-9249cbcf-62d3-4531-b29e-debac7dd5d8c" in namespace "projected-7384" to be "Succeeded or Failed" Apr 16 04:24:22.969: INFO: Pod "metadata-volume-9249cbcf-62d3-4531-b29e-debac7dd5d8c": Phase="Pending", Reason="", readiness=false. Elapsed: 234.904116ms Apr 16 04:24:25.205: INFO: Pod "metadata-volume-9249cbcf-62d3-4531-b29e-debac7dd5d8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.470424447s [1mSTEP[0m: Saw pod success Apr 16 04:24:25.205: INFO: Pod "metadata-volume-9249cbcf-62d3-4531-b29e-debac7dd5d8c" satisfied condition "Succeeded or Failed" Apr 16 04:24:25.441: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod metadata-volume-9249cbcf-62d3-4531-b29e-debac7dd5d8c container client-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:24:25.931: INFO: Waiting for pod metadata-volume-9249cbcf-62d3-4531-b29e-debac7dd5d8c to disappear Apr 16 04:24:26.166: INFO: Pod metadata-volume-9249cbcf-62d3-4531-b29e-debac7dd5d8c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
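------------------------------
Note on the Projected downwardAPI test above: the "metadata-volume-..." pod mounts a projected volume whose file content is resolved from pod metadata through a fieldRef, which is how the pod name becomes readable from inside the container. A sketch of that volume source using the corev1 types (volume and file names are placeholders):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file "podname" is populated from the pod's own metadata.
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}
------------------------------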
[32m• [SLOW TEST:5.332 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":91,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:26.700: INFO: Only supported for providers [gce gke] (not aws) ... skipping 83 lines ... [32m• [SLOW TEST:8.537 seconds][0m [sig-api-machinery] Garbage collector [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should not be blocked by dependency circle [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":5,"skipped":62,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:29.197: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 51 lines ... 
[32m• [SLOW TEST:13.748 seconds][0m [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should be able to convert a non homogeneous list of CRs [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:24:26.768: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 [1mSTEP[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 16 04:24:28.184: INFO: Waiting up to 5m0s for pod "security-context-19d5b1f5-19bc-4847-ba65-606980f60d9c" in namespace "security-context-470" to be "Succeeded or Failed" Apr 16 04:24:28.419: INFO: Pod "security-context-19d5b1f5-19bc-4847-ba65-606980f60d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 235.007654ms Apr 16 04:24:30.655: INFO: Pod "security-context-19d5b1f5-19bc-4847-ba65-606980f60d9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.470974365s [1mSTEP[0m: Saw pod success Apr 16 04:24:30.656: INFO: Pod "security-context-19d5b1f5-19bc-4847-ba65-606980f60d9c" satisfied condition "Succeeded or Failed" Apr 16 04:24:30.891: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod security-context-19d5b1f5-19bc-4847-ba65-606980f60d9c container test-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:24:31.369: INFO: Waiting for pod security-context-19d5b1f5-19bc-4847-ba65-606980f60d9c to disappear Apr 16 04:24:31.606: INFO: Pod security-context-19d5b1f5-19bc-4847-ba65-606980f60d9c no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 19 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:24:31.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-4264" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeFeature:RuntimeHandler]","total":-1,"completed":8,"skipped":55,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 75 lines ... 
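------------------------------
Note on the Security Context test above: setting pod.Spec.SecurityContext.RunAsUser makes every container process in the pod run with that UID, which is what the "Creating a pod to test pod.Spec.SecurityContext.RunAsUser" step verifies. A sketch of such a pod spec with the corev1 types (image and names are placeholders, not the test's fixtures):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: corev1.PodSpec{
			// Pod-level RunAsUser applies to all containers unless overridden
			// by a container-level SecurityContext.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -u"}, // would print 1001
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name, *pod.Spec.SecurityContext.RunAsUser)
}
------------------------------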
[32m• [SLOW TEST:251.709 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should *not* be restarted with a non-local redirect http liveness probe [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:295[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":2,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0} [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:22:53.576: INFO: >>> kubeConfig: /root/.kube/config ... skipping 258 lines ... Apr 16 04:22:50.796: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-26 Apr 16 04:22:51.032: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-26 Apr 16 04:22:51.267: INFO: creating *v1.StatefulSet: csi-mock-volumes-26-8843/csi-mockplugin Apr 16 04:22:51.504: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-26 Apr 16 04:22:51.739: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-26" Apr 16 04:22:51.975: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-26 to register on node ip-172-20-63-100.ap-south-1.compute.internal I0416 04:23:01.266299 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0416 04:23:01.503585 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-26","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0416 04:23:01.741175 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I0416 04:23:01.979110 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0416 04:23:02.491158 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-26","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0416 04:23:03.680762 6655 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-26","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} [1mSTEP[0m: Creating pod Apr 16 04:23:09.872: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0416 04:23:10.359235 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0416 04:23:12.776730 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I0416 04:23:13.941401 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0416 04:23:14.182177 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Apr 16 04:23:14.420: INFO: >>> kubeConfig: /root/.kube/config I0416 04:23:15.930289 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599","storage.kubernetes.io/csiProvisionerIdentity":"1650082982089-8081-csi-mock-csi-mock-volumes-26"}},"Response":{},"Error":"","FullError":null} I0416 04:23:16.169872 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0416 04:23:16.407619 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Apr 16 04:23:16.647: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:23:18.131: INFO: >>> kubeConfig: /root/.kube/config Apr 16 04:23:19.644: INFO: >>> 
kubeConfig: /root/.kube/config I0416 04:23:21.126605 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599/globalmount","target_path":"/var/lib/kubelet/pods/ae745149-daee-4402-ae19-faf59cbbeebe/volumes/kubernetes.io~csi/pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599","storage.kubernetes.io/csiProvisionerIdentity":"1650082982089-8081-csi-mock-csi-mock-volumes-26"}},"Response":{},"Error":"","FullError":null} Apr 16 04:23:24.817: INFO: Deleting pod "pvc-volume-tester-q7vxj" in namespace "csi-mock-volumes-26" Apr 16 04:23:25.053: INFO: Wait up to 5m0s for pod "pvc-volume-tester-q7vxj" to be fully deleted Apr 16 04:23:27.077: INFO: >>> kubeConfig: /root/.kube/config I0416 04:23:28.622783 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ae745149-daee-4402-ae19-faf59cbbeebe/volumes/kubernetes.io~csi/pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599/mount"},"Response":{},"Error":"","FullError":null} I0416 04:23:28.897636 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0416 04:23:29.136943 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89419e3e-85cf-4981-92e1-ac74d1ee8599/globalmount"},"Response":{},"Error":"","FullError":null} I0416 04:23:29.781618 6655 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} [1mSTEP[0m: Checking PVC events Apr 16 04:23:30.762: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nqdsl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-26", SelfLink:"", UID:"89419e3e-85cf-4981-92e1-ac74d1ee8599", ResourceVersion:"8802", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63785679789, loc:(*time.Location)(0xa0acfa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002d273c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d273e0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0029333f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002933400), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, 
Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Apr 16 04:23:30.762: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nqdsl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-26", SelfLink:"", UID:"89419e3e-85cf-4981-92e1-ac74d1ee8599", ResourceVersion:"8814", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63785679789, loc:(*time.Location)(0xa0acfa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-63-100.ap-south-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002d27758), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d27770), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002d27788), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d277a0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002933700), VolumeMode:(*v1.PersistentVolumeMode)(0xc002933710), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Apr 16 04:23:30.762: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nqdsl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-26", SelfLink:"", UID:"89419e3e-85cf-4981-92e1-ac74d1ee8599", ResourceVersion:"8815", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63785679789, loc:(*time.Location)(0xa0acfa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-26", "volume.kubernetes.io/selected-node":"ip-172-20-63-100.ap-south-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003660270), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003660288), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036602a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0036602b8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036602d0), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc0036602e8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0033a7f10), VolumeMode:(*v1.PersistentVolumeMode)(0xc0033a7f50), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Apr 16 04:23:30.762: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nqdsl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-26", SelfLink:"", UID:"89419e3e-85cf-4981-92e1-ac74d1ee8599", ResourceVersion:"8826", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63785679789, loc:(*time.Location)(0xa0acfa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-26"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039933b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039933c8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039933e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039933f8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003993410), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003993428), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003229330), VolumeMode:(*v1.PersistentVolumeMode)(0xc003229340), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Apr 16 04:23:30.762: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nqdsl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-26", SelfLink:"", UID:"89419e3e-85cf-4981-92e1-ac74d1ee8599", ResourceVersion:"8915", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63785679789, loc:(*time.Location)(0xa0acfa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-26", "volume.kubernetes.io/selected-node":"ip-172-20-63-100.ap-south-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003993458), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003993470), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003993488), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039934a0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039934b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039934d0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003229370), VolumeMode:(*v1.PersistentVolumeMode)(0xc003229380), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} ... skipping 51 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m storage capacity [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023[0m exhausted, late binding, with topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":9,"skipped":85,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes ... skipping 7 lines ... Apr 16 04:24:05.610: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics [1mSTEP[0m: creating a test aws volume Apr 16 04:24:06.857: INFO: Successfully created a new PD: "aws://ap-south-1a/vol-01c13ea78f20c8d61". Apr 16 04:24:06.857: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod exec-volume-test-inlinevolume-nlnl [1mSTEP[0m: Creating a pod to test exec-volume-test Apr 16 04:24:07.095: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-nlnl" in namespace "volume-3748" to be "Succeeded or Failed" Apr 16 04:24:07.330: INFO: Pod "exec-volume-test-inlinevolume-nlnl": Phase="Pending", Reason="", readiness=false. Elapsed: 234.506919ms Apr 16 04:24:09.565: INFO: Pod "exec-volume-test-inlinevolume-nlnl": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.470241277s Apr 16 04:24:11.800: INFO: Pod "exec-volume-test-inlinevolume-nlnl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.705368616s Apr 16 04:24:14.035: INFO: Pod "exec-volume-test-inlinevolume-nlnl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.939734257s Apr 16 04:24:16.275: INFO: Pod "exec-volume-test-inlinevolume-nlnl": Phase="Pending", Reason="", readiness=false. Elapsed: 9.180161038s Apr 16 04:24:18.511: INFO: Pod "exec-volume-test-inlinevolume-nlnl": Phase="Pending", Reason="", readiness=false. Elapsed: 11.415607749s Apr 16 04:24:20.745: INFO: Pod "exec-volume-test-inlinevolume-nlnl": Phase="Pending", Reason="", readiness=false. Elapsed: 13.650409822s Apr 16 04:24:22.981: INFO: Pod "exec-volume-test-inlinevolume-nlnl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.886029418s [1mSTEP[0m: Saw pod success Apr 16 04:24:22.981: INFO: Pod "exec-volume-test-inlinevolume-nlnl" satisfied condition "Succeeded or Failed" Apr 16 04:24:23.216: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod exec-volume-test-inlinevolume-nlnl container exec-container-inlinevolume-nlnl: <nil> [1mSTEP[0m: delete the pod Apr 16 04:24:23.694: INFO: Waiting for pod exec-volume-test-inlinevolume-nlnl to disappear Apr 16 04:24:23.928: INFO: Pod exec-volume-test-inlinevolume-nlnl no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-inlinevolume-nlnl Apr 16 04:24:23.928: INFO: Deleting pod "exec-volume-test-inlinevolume-nlnl" in namespace "volume-3748" Apr 16 04:24:24.470: INFO: Couldn't delete PD "aws://ap-south-1a/vol-01c13ea78f20c8d61", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01c13ea78f20c8d61 is currently attached to i-0af590e4d35c338de status code: 400, request id: 2beeada7-df3b-4cc9-adf3-18411896fca3 Apr 16 04:24:30.522: INFO: Couldn't delete PD "aws://ap-south-1a/vol-01c13ea78f20c8d61", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01c13ea78f20c8d61 is currently attached to i-0af590e4d35c338de status code: 400, request id: d860c241-acd1-4592-873e-674147ac2958 Apr 16 04:24:36.626: INFO: Successfully deleted PD "aws://ap-south-1a/vol-01c13ea78f20c8d61". [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:24:36.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-3748" for this suite. ... skipping 6 lines ... 
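------------------------------
Note on the "Couldn't delete PD ..., sleeping 5s: ... VolumeInUse" lines above: EBS detach is asynchronous, so DeleteVolume keeps returning VolumeInUse until the instance releases the volume, and the test framework simply sleeps and retries. A sketch of that retry pattern with aws-sdk-go v1 (the helper function and retry count are illustrative, not the framework's own):

package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func deleteVolumeWithRetry(svc *ec2.EC2, volumeID string) error {
	for i := 0; i < 10; i++ {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			return nil
		}
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			// Volume is still attached to the instance; wait for detach to finish.
			time.Sleep(5 * time.Second)
			continue
		}
		return err
	}
	return fmt.Errorf("volume %s still in use after retries", volumeID)
}

func main() {
	svc := ec2.New(session.Must(session.NewSession()))
	_ = deleteVolumeWithRetry(svc, "vol-0123456789abcdef0") // placeholder volume ID
}
------------------------------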
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":12,"skipped":127,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:37.122: INFO: Only supported for providers [azure] (not aws) ... skipping 33 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:24:37.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-6235" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":12,"skipped":77,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 36 lines ... [32m• [SLOW TEST:40.291 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to change the type from NodePort to ExternalName [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":8,"skipped":50,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode ... skipping 38 lines ... 
[1mSTEP[0m: Deleting pod hostexec-ip-172-20-56-43.ap-south-1.compute.internal-7fpmm in namespace volumemode-3952 Apr 16 04:24:36.059: INFO: Deleting pod "pod-df83a1c6-6218-4ca3-bb9e-530199a4ac52" in namespace "volumemode-3952" Apr 16 04:24:36.298: INFO: Wait up to 5m0s for pod "pod-df83a1c6-6218-4ca3-bb9e-530199a4ac52" to be fully deleted [1mSTEP[0m: Deleting pv and pvc Apr 16 04:24:40.773: INFO: Deleting PersistentVolumeClaim "pvc-7j7lm" Apr 16 04:24:41.011: INFO: Deleting PersistentVolume "aws-stnsq" Apr 16 04:24:41.578: INFO: Couldn't delete PD "aws://ap-south-1a/vol-03bc1ac7a6ad229d5", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03bc1ac7a6ad229d5 is currently attached to i-096937a720dee4796 status code: 400, request id: 4d882470-f037-4e91-9b9f-99416b1bb6b3 Apr 16 04:24:47.721: INFO: Successfully deleted PD "aws://ap-south-1a/vol-03bc1ac7a6ad229d5". [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:24:47.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumemode-3952" for this suite. ... skipping 6 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (block volmode)] volumeMode [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":73,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:48.210: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 127 lines ... Apr 16 04:23:51.870: INFO: PersistentVolumeClaim csi-hostpath6jg6k found but phase is Pending instead of Bound. Apr 16 04:23:54.107: INFO: PersistentVolumeClaim csi-hostpath6jg6k found but phase is Pending instead of Bound. Apr 16 04:23:56.345: INFO: PersistentVolumeClaim csi-hostpath6jg6k found but phase is Pending instead of Bound. Apr 16 04:23:58.583: INFO: PersistentVolumeClaim csi-hostpath6jg6k found and phase=Bound (18.139924269s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-mfhw [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:23:59.301: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mfhw" in namespace "provisioning-8916" to be "Succeeded or Failed" Apr 16 04:23:59.538: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. Elapsed: 237.397279ms Apr 16 04:24:01.778: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47742597s Apr 16 04:24:04.017: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.716100649s Apr 16 04:24:06.255: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953969808s Apr 16 04:24:08.493: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. Elapsed: 9.192031285s Apr 16 04:24:10.732: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.431233128s [1mSTEP[0m: Saw pod success Apr 16 04:24:10.732: INFO: Pod "pod-subpath-test-dynamicpv-mfhw" satisfied condition "Succeeded or Failed" Apr 16 04:24:10.969: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-mfhw container test-container-subpath-dynamicpv-mfhw: <nil> [1mSTEP[0m: delete the pod Apr 16 04:24:11.559: INFO: Waiting for pod pod-subpath-test-dynamicpv-mfhw to disappear Apr 16 04:24:11.796: INFO: Pod pod-subpath-test-dynamicpv-mfhw no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-mfhw Apr 16 04:24:11.796: INFO: Deleting pod "pod-subpath-test-dynamicpv-mfhw" in namespace "provisioning-8916" [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-mfhw [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:24:12.272: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mfhw" in namespace "provisioning-8916" to be "Succeeded or Failed" Apr 16 04:24:12.510: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. Elapsed: 237.54061ms Apr 16 04:24:14.749: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476408978s Apr 16 04:24:16.988: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715026418s Apr 16 04:24:19.226: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953248431s Apr 16 04:24:21.464: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Pending", Reason="", readiness=false. Elapsed: 9.19191926s Apr 16 04:24:23.703: INFO: Pod "pod-subpath-test-dynamicpv-mfhw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.43047527s [1mSTEP[0m: Saw pod success Apr 16 04:24:23.703: INFO: Pod "pod-subpath-test-dynamicpv-mfhw" satisfied condition "Succeeded or Failed" Apr 16 04:24:23.940: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-mfhw container test-container-subpath-dynamicpv-mfhw: <nil> [1mSTEP[0m: delete the pod Apr 16 04:24:24.421: INFO: Waiting for pod pod-subpath-test-dynamicpv-mfhw to disappear Apr 16 04:24:24.658: INFO: Pod pod-subpath-test-dynamicpv-mfhw no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-mfhw Apr 16 04:24:24.658: INFO: Deleting pod "pod-subpath-test-dynamicpv-mfhw" in namespace "provisioning-8916" ... skipping 60 lines ... 
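------------------------------
Note on the repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' sequences above: they come from a poll loop that re-reads the pod every couple of seconds until its phase is terminal. A sketch of such a loop using client-go and the apimachinery wait helper, exercised here against a fake clientset (the helper is illustrative, not the e2e framework's own implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

func waitForPodTerminal(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Done once the pod reaches a terminal phase.
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}

func main() {
	// Fake clientset pre-loaded with an already-succeeded pod, so the poll returns immediately.
	cs := fake.NewSimpleClientset(&corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
		Status:     corev1.PodStatus{Phase: corev1.PodSucceeded},
	})
	fmt.Println(waitForPodTerminal(cs, "default", "demo")) // <nil>
}
------------------------------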
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":35,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:48.517: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 11 lines ... [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":5,"skipped":36,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:24:34.515: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 [1mSTEP[0m: Creating a pod to test downward API volume plugin Apr 16 04:24:36.063: INFO: Waiting up to 5m0s for pod "metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3" in namespace "projected-3312" to be "Succeeded or Failed" Apr 16 04:24:36.299: INFO: Pod "metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3": Phase="Pending", Reason="", readiness=false. Elapsed: 235.880149ms Apr 16 04:24:38.535: INFO: Pod "metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471511146s Apr 16 04:24:40.770: INFO: Pod "metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707183745s Apr 16 04:24:43.008: INFO: Pod "metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.944557769s Apr 16 04:24:45.244: INFO: Pod "metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.180777612s Apr 16 04:24:47.480: INFO: Pod "metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.417424484s [1mSTEP[0m: Saw pod success Apr 16 04:24:47.481: INFO: Pod "metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3" satisfied condition "Succeeded or Failed" Apr 16 04:24:47.716: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3 container client-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:24:48.192: INFO: Waiting for pod metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3 to disappear Apr 16 04:24:48.428: INFO: Pod metadata-volume-6c1d4b04-b95c-49bf-b798-e738d149d8d3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:14.386 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":36,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:48.949: INFO: Only supported for providers [vsphere] (not aws) ... skipping 100 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":8,"skipped":114,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:24:32.090: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename kubectl [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 29 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23[0m Kubectl client-side validation [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992[0m should create/apply a CR with unknown fields for CRD with no validation schema [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:993[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":9,"skipped":114,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:50.922: INFO: Only supported for providers [gce gke] (not aws) ... skipping 5 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: gcepd] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m ... skipping 162 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m Pod Container Status [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:199[0m should never report success for a pending container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":9,"skipped":28,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:24:51.350: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 62 lines ... Apr 16 04:24:32.680: INFO: PersistentVolumeClaim pvc-lthzv found but phase is Pending instead of Bound. 
Apr 16 04:24:34.932: INFO: PersistentVolumeClaim pvc-lthzv found and phase=Bound (4.726357855s) Apr 16 04:24:34.932: INFO: Waiting up to 3m0s for PersistentVolume local-92m4t to have phase Bound Apr 16 04:24:35.169: INFO: PersistentVolume local-92m4t found and phase=Bound (237.076678ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-j7c6 [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:24:35.886: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j7c6" in namespace "provisioning-7128" to be "Succeeded or Failed" Apr 16 04:24:36.123: INFO: Pod "pod-subpath-test-preprovisionedpv-j7c6": Phase="Pending", Reason="", readiness=false. Elapsed: 237.309051ms Apr 16 04:24:38.362: INFO: Pod "pod-subpath-test-preprovisionedpv-j7c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476211209s Apr 16 04:24:40.603: INFO: Pod "pod-subpath-test-preprovisionedpv-j7c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.716611657s Apr 16 04:24:42.842: INFO: Pod "pod-subpath-test-preprovisionedpv-j7c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.955553493s Apr 16 04:24:45.080: INFO: Pod "pod-subpath-test-preprovisionedpv-j7c6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.193837444s Apr 16 04:24:47.319: INFO: Pod "pod-subpath-test-preprovisionedpv-j7c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.43303201s [1mSTEP[0m: Saw pod success Apr 16 04:24:47.319: INFO: Pod "pod-subpath-test-preprovisionedpv-j7c6" satisfied condition "Succeeded or Failed" Apr 16 04:24:47.556: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-j7c6 container test-container-volume-preprovisionedpv-j7c6: <nil> [1mSTEP[0m: delete the pod Apr 16 04:24:48.038: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j7c6 to disappear Apr 16 04:24:48.275: INFO: Pod pod-subpath-test-preprovisionedpv-j7c6 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-j7c6 Apr 16 04:24:48.276: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j7c6" in namespace "provisioning-7128" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":11,"skipped":108,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 13 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:24:53.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-2935" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":12,"skipped":110,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:24:54.279: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 128 lines ...
• [SLOW TEST:6.185 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":10,"skipped":78,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 04:24:37.763: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62" in namespace "downward-api-4338" to be "Succeeded or Failed"
Apr 16 04:24:37.998: INFO: Pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62": Phase="Pending", Reason="", readiness=false. Elapsed: 235.148346ms
Apr 16 04:24:40.233: INFO: Pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470542059s
Apr 16 04:24:42.469: INFO: Pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.705999478s
Apr 16 04:24:44.705: INFO: Pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.941943169s
Apr 16 04:24:46.940: INFO: Pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62": Phase="Pending", Reason="", readiness=false. Elapsed: 9.177295747s
Apr 16 04:24:49.176: INFO: Pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62": Phase="Pending", Reason="", readiness=false. Elapsed: 11.413392783s
Apr 16 04:24:51.418: INFO: Pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62": Phase="Pending", Reason="", readiness=false. Elapsed: 13.65540549s
Apr 16 04:24:53.670: INFO: Pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.907006976s
STEP: Saw pod success
Apr 16 04:24:53.670: INFO: Pod "downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62" satisfied condition "Succeeded or Failed"
Apr 16 04:24:53.907: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62 container client-container: <nil>
STEP: delete the pod
Apr 16 04:24:54.411: INFO: Waiting for pod downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62 to disappear
Apr 16 04:24:54.646: INFO: Pod downwardapi-volume-d4261469-6c8d-4277-8d18-8eda6452da62 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:18.772 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":91,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:24:55.129: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 61 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
When pod refers to non-existent ephemeral storage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
should allow deletion of pod with invalid volume : configmap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:24:55.609: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 59 lines ...
• [SLOW TEST:23.441 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should block an eviction until the PDB is updated to allow it [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:24:56.220: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 38 lines ...
Apr 16 04:24:06.886: INFO: PersistentVolumeClaim pvc-j5knc found and phase=Bound (236.992477ms)
Apr 16 04:24:06.886: INFO: Waiting up to 3m0s for PersistentVolume nfs-4wjvh to have phase Bound
Apr 16 04:24:07.123: INFO: PersistentVolume nfs-4wjvh found and phase=Bound (236.985686ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
STEP: Writing to the volume.
Apr 16 04:24:07.843: INFO: Waiting up to 5m0s for pod "pvc-tester-czcz2" in namespace "pv-9461" to be "Succeeded or Failed"
Apr 16 04:24:08.081: INFO: Pod "pvc-tester-czcz2": Phase="Pending", Reason="", readiness=false. Elapsed: 237.977144ms
Apr 16 04:24:10.319: INFO: Pod "pvc-tester-czcz2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476637011s
Apr 16 04:24:12.558: INFO: Pod "pvc-tester-czcz2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715030718s
Apr 16 04:24:14.796: INFO: Pod "pvc-tester-czcz2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953459355s
Apr 16 04:24:17.034: INFO: Pod "pvc-tester-czcz2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.191558145s
Apr 16 04:24:19.272: INFO: Pod "pvc-tester-czcz2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.429543525s
Apr 16 04:24:21.510: INFO: Pod "pvc-tester-czcz2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.667489644s
Apr 16 04:24:23.749: INFO: Pod "pvc-tester-czcz2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.906136707s
Apr 16 04:24:25.986: INFO: Pod "pvc-tester-czcz2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.143563259s
Apr 16 04:24:28.224: INFO: Pod "pvc-tester-czcz2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.381013173s
STEP: Saw pod success
Apr 16 04:24:28.224: INFO: Pod "pvc-tester-czcz2" satisfied condition "Succeeded or Failed"
STEP: Deleting the claim
Apr 16 04:24:28.224: INFO: Deleting pod "pvc-tester-czcz2" in namespace "pv-9461"
Apr 16 04:24:28.469: INFO: Wait up to 5m0s for pod "pvc-tester-czcz2" to be fully deleted
Apr 16 04:24:28.711: INFO: Deleting PVC pvc-j5knc to trigger reclamation of PV
Apr 16 04:24:28.711: INFO: Deleting PersistentVolumeClaim "pvc-j5knc"
Apr 16 04:24:28.949: INFO: Waiting for reclaim process to complete.
... skipping 5 lines ...
Apr 16 04:24:38.141: INFO: PersistentVolume nfs-4wjvh found and phase=Available (9.191834939s)
Apr 16 04:24:38.378: INFO: PV nfs-4wjvh now in "Available" phase
STEP: Re-mounting the volume.
Apr 16 04:24:38.617: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-nllfn] to have phase Bound
Apr 16 04:24:38.855: INFO: PersistentVolumeClaim pvc-nllfn found and phase=Bound (237.26966ms)
STEP: Verifying the mount has been cleaned.
Apr 16 04:24:39.093: INFO: Waiting up to 5m0s for pod "pvc-tester-snjtp" in namespace "pv-9461" to be "Succeeded or Failed"
Apr 16 04:24:39.330: INFO: Pod "pvc-tester-snjtp": Phase="Pending", Reason="", readiness=false. Elapsed: 237.346845ms
Apr 16 04:24:41.569: INFO: Pod "pvc-tester-snjtp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475562368s
Apr 16 04:24:43.806: INFO: Pod "pvc-tester-snjtp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.713232569s
Apr 16 04:24:46.044: INFO: Pod "pvc-tester-snjtp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.951340328s
STEP: Saw pod success
Apr 16 04:24:46.044: INFO: Pod "pvc-tester-snjtp" satisfied condition "Succeeded or Failed"
Apr 16 04:24:46.044: INFO: Deleting pod "pvc-tester-snjtp" in namespace "pv-9461"
Apr 16 04:24:46.285: INFO: Wait up to 5m0s for pod "pvc-tester-snjtp" to be fully deleted
Apr 16 04:24:46.523: INFO: Pod exited without failure; the volume has been recycled.
Apr 16 04:24:46.523: INFO: Removing second PVC, waiting for the recycler to finish before cleanup.
Apr 16 04:24:46.523: INFO: Deleting PVC pvc-nllfn to trigger reclamation of PV
Apr 16 04:24:46.523: INFO: Deleting PersistentVolumeClaim "pvc-nllfn"
... skipping 27 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
when invoking the Recycle reclaim policy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:265
should test that a PV becomes Available and is clean after the PVC is deleted.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":6,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:01.640: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 49 lines ...
• [SLOW TEST:31.577 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields in an embedded object [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:03.738: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 120 lines ...
STEP: Deleting pod hostexec-ip-172-20-63-100.ap-south-1.compute.internal-v7p8p in namespace volumemode-6389
Apr 16 04:24:54.107: INFO: Deleting pod "pod-8335ddfc-d56b-4a16-816d-5ead14d0d25b" in namespace "volumemode-6389"
Apr 16 04:24:54.346: INFO: Wait up to 5m0s for pod "pod-8335ddfc-d56b-4a16-816d-5ead14d0d25b" to be fully deleted
STEP: Deleting pv and pvc
Apr 16 04:24:56.816: INFO: Deleting PersistentVolumeClaim "pvc-q5rb9"
Apr 16 04:24:57.051: INFO: Deleting PersistentVolume "aws-pcgmq"
Apr 16 04:24:57.597: INFO: Couldn't delete PD "aws://ap-south-1a/vol-0a6d94df3d7275247", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0a6d94df3d7275247 is currently attached to i-0da5f478e6a9b5aaf
	status code: 400, request id: 915fa83c-e4d8-4b78-9983-00486456b5b1
Apr 16 04:25:03.725: INFO: Successfully deleted PD "aws://ap-south-1a/vol-0a6d94df3d7275247".
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:25:03.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-6389" for this suite.
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":79,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support non-existent path
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Apr 16 04:24:57.426: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Apr 16 04:24:57.427: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-mgwd
STEP: Creating a pod to test subpath
Apr 16 04:24:57.666: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-mgwd" in namespace "provisioning-2241" to be "Succeeded or Failed"
Apr 16 04:24:57.903: INFO: Pod "pod-subpath-test-inlinevolume-mgwd": Phase="Pending", Reason="", readiness=false. Elapsed: 237.249431ms
Apr 16 04:25:00.142: INFO: Pod "pod-subpath-test-inlinevolume-mgwd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476043503s
Apr 16 04:25:02.382: INFO: Pod "pod-subpath-test-inlinevolume-mgwd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.715508799s
STEP: Saw pod success
Apr 16 04:25:02.382: INFO: Pod "pod-subpath-test-inlinevolume-mgwd" satisfied condition "Succeeded or Failed"
Apr 16 04:25:02.619: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-mgwd container test-container-volume-inlinevolume-mgwd: <nil>
STEP: delete the pod
Apr 16 04:25:03.109: INFO: Waiting for pod pod-subpath-test-inlinevolume-mgwd to disappear
Apr 16 04:25:03.350: INFO: Pod pod-subpath-test-inlinevolume-mgwd no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-mgwd
Apr 16 04:25:03.350: INFO: Deleting pod "pod-subpath-test-inlinevolume-mgwd" in namespace "provisioning-2241"
... skipping 12 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should support non-existent path
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":42,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:13.436 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":10,"skipped":127,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:24.481 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:06.351: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 48 lines ...
Apr 16 04:24:24.827: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volume-4258vtgm9
STEP: creating a claim
Apr 16 04:24:25.079: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-5lnl
STEP: Creating a pod to test exec-volume-test
Apr 16 04:24:25.800: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-5lnl" in namespace "volume-4258" to be "Succeeded or Failed"
Apr 16 04:24:26.039: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 239.293744ms
Apr 16 04:24:28.278: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478687422s
Apr 16 04:24:30.517: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.717796171s
Apr 16 04:24:32.758: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.95864677s
Apr 16 04:24:35.000: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200054906s
Apr 16 04:24:37.245: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 11.445857042s
... skipping 2 lines ...
Apr 16 04:24:43.970: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 18.170861661s
Apr 16 04:24:46.210: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 20.410436924s
Apr 16 04:24:48.451: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 22.651033141s
Apr 16 04:24:50.691: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Pending", Reason="", readiness=false. Elapsed: 24.891682362s
Apr 16 04:24:52.933: INFO: Pod "exec-volume-test-dynamicpv-5lnl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.132894185s
STEP: Saw pod success
Apr 16 04:24:52.933: INFO: Pod "exec-volume-test-dynamicpv-5lnl" satisfied condition "Succeeded or Failed"
Apr 16 04:24:53.172: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod exec-volume-test-dynamicpv-5lnl container exec-container-dynamicpv-5lnl: <nil>
STEP: delete the pod
Apr 16 04:24:53.693: INFO: Waiting for pod exec-volume-test-dynamicpv-5lnl to disappear
Apr 16 04:24:53.938: INFO: Pod exec-volume-test-dynamicpv-5lnl no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-5lnl
Apr 16 04:24:53.938: INFO: Deleting pod "exec-volume-test-dynamicpv-5lnl" in namespace "volume-4258"
... skipping 113 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
(OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":12,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:07.487: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 04:25:05.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56506056-dcd7-475b-9fcf-8ff34ce74be0" in namespace "projected-6177" to be "Succeeded or Failed"
Apr 16 04:25:05.890: INFO: Pod "downwardapi-volume-56506056-dcd7-475b-9fcf-8ff34ce74be0": Phase="Pending", Reason="", readiness=false. Elapsed: 234.801027ms
Apr 16 04:25:08.126: INFO: Pod "downwardapi-volume-56506056-dcd7-475b-9fcf-8ff34ce74be0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.470528264s
STEP: Saw pod success
Apr 16 04:25:08.126: INFO: Pod "downwardapi-volume-56506056-dcd7-475b-9fcf-8ff34ce74be0" satisfied condition "Succeeded or Failed"
Apr 16 04:25:08.361: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod downwardapi-volume-56506056-dcd7-475b-9fcf-8ff34ce74be0 container client-container: <nil>
STEP: delete the pod
Apr 16 04:25:08.840: INFO: Waiting for pod downwardapi-volume-56506056-dcd7-475b-9fcf-8ff34ce74be0 to disappear
Apr 16 04:25:09.074: INFO: Pod downwardapi-volume-56506056-dcd7-475b-9fcf-8ff34ce74be0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 13 lines ...
Apr 16 04:25:04.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Apr 16 04:25:05.867: INFO: Waiting up to 5m0s for pod "downward-api-96a4284f-bc27-45c0-b89b-76163e0beb62" in namespace "downward-api-3406" to be "Succeeded or Failed"
Apr 16 04:25:06.102: INFO: Pod "downward-api-96a4284f-bc27-45c0-b89b-76163e0beb62": Phase="Pending", Reason="", readiness=false. Elapsed: 234.766587ms
Apr 16 04:25:08.339: INFO: Pod "downward-api-96a4284f-bc27-45c0-b89b-76163e0beb62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471688424s
Apr 16 04:25:10.575: INFO: Pod "downward-api-96a4284f-bc27-45c0-b89b-76163e0beb62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.707003832s
STEP: Saw pod success
Apr 16 04:25:10.575: INFO: Pod "downward-api-96a4284f-bc27-45c0-b89b-76163e0beb62" satisfied condition "Succeeded or Failed"
Apr 16 04:25:10.817: INFO: Trying to get logs from node ip-172-20-63-100.ap-south-1.compute.internal pod downward-api-96a4284f-bc27-45c0-b89b-76163e0beb62 container dapi-container: <nil>
STEP: delete the pod
Apr 16 04:25:11.296: INFO: Waiting for pod downward-api-96a4284f-bc27-45c0-b89b-76163e0beb62 to disappear
Apr 16 04:25:11.532: INFO: Pod downward-api-96a4284f-bc27-45c0-b89b-76163e0beb62 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.559 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":131,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:18.015 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":11,"skipped":98,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:25:06.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Apr 16 04:25:07.820: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-9bbc3a8b-decb-4f58-afc3-267a53f8141f" in namespace "security-context-test-5232" to be "Succeeded or Failed"
Apr 16 04:25:08.059: INFO: Pod "busybox-privileged-true-9bbc3a8b-decb-4f58-afc3-267a53f8141f": Phase="Pending", Reason="", readiness=false. Elapsed: 239.040904ms
Apr 16 04:25:10.307: INFO: Pod "busybox-privileged-true-9bbc3a8b-decb-4f58-afc3-267a53f8141f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486850581s
Apr 16 04:25:12.553: INFO: Pod "busybox-privileged-true-9bbc3a8b-decb-4f58-afc3-267a53f8141f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.732654244s
Apr 16 04:25:12.553: INFO: Pod "busybox-privileged-true-9bbc3a8b-decb-4f58-afc3-267a53f8141f" satisfied condition "Succeeded or Failed"
Apr 16 04:25:12.799: INFO: Got logs for pod "busybox-privileged-true-9bbc3a8b-decb-4f58-afc3-267a53f8141f": ""
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:25:12.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5232" for this suite.
... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
When creating a pod with privileged
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":6,"skipped":20,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 129 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":2,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 17 lines ...
Apr 16 04:25:02.979: INFO: PersistentVolumeClaim pvc-l9s72 found but phase is Pending instead of Bound.
Apr 16 04:25:05.220: INFO: PersistentVolumeClaim pvc-l9s72 found and phase=Bound (4.717150365s)
Apr 16 04:25:05.220: INFO: Waiting up to 3m0s for PersistentVolume local-zzqq2 to have phase Bound
Apr 16 04:25:05.458: INFO: PersistentVolume local-zzqq2 found and phase=Bound (237.280743ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-vnnm
STEP: Creating a pod to test exec-volume-test
Apr 16 04:25:06.171: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-vnnm" in namespace "volume-7347" to be "Succeeded or Failed"
Apr 16 04:25:06.411: INFO: Pod "exec-volume-test-preprovisionedpv-vnnm": Phase="Pending", Reason="", readiness=false. Elapsed: 239.583499ms
Apr 16 04:25:08.649: INFO: Pod "exec-volume-test-preprovisionedpv-vnnm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.477753741s
STEP: Saw pod success
Apr 16 04:25:08.649: INFO: Pod "exec-volume-test-preprovisionedpv-vnnm" satisfied condition "Succeeded or Failed"
Apr 16 04:25:08.886: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-vnnm container exec-container-preprovisionedpv-vnnm: <nil>
STEP: delete the pod
Apr 16 04:25:09.372: INFO: Waiting for pod exec-volume-test-preprovisionedpv-vnnm to disappear
Apr 16 04:25:09.609: INFO: Pod exec-volume-test-preprovisionedpv-vnnm no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-vnnm
Apr 16 04:25:09.610: INFO: Deleting pod "exec-volume-test-preprovisionedpv-vnnm" in namespace "volume-7347"
... skipping 20 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Pre-provisioned PV (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":13,"skipped":130,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:14.096: INFO: Only supported for providers [gce gke] (not aws)
... skipping 20 lines ...
[BeforeEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 04:25:13.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-700d96ac-33c5-4855-9a7e-dde638de391a
[AfterEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:25:14.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8470" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":4,"skipped":6,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:15.346: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 33 lines ...
Apr 16 04:24:00.852: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-7124mktk7      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-7124    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7124mktk7,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-7124    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7124mktk7,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-7124    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7124mktk7,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-7124mktk7  b8fe3385-4642-4edc-a16e-828330d176bb 10802 0 2022-04-16 04:24:01 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2022-04-16 04:24:01 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-s5txk pvc- provisioning-7124  486895c0-bb71-4713-8f79-5a4cc47c2188 10835 0 2022-04-16 04:24:01 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2022-04-16 04:24:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7124mktk7,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-788d98a9-7e84-4a7e-b6ff-aa083a822720 in namespace provisioning-7124
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Apr 16 04:24:29.457: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-5hj6x" in namespace "provisioning-7124" to be "Succeeded or Failed"
Apr 16 04:24:29.692: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 234.95306ms
Apr 16 04:24:31.927: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470442152s
Apr 16 04:24:34.166: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.708635301s
Apr 16 04:24:36.404: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.946647689s
Apr 16 04:24:38.640: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 9.18254221s
Apr 16 04:24:40.875: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 11.418089434s
Apr 16 04:24:43.111: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 13.65414102s
Apr 16 04:24:45.347: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 15.889558409s
Apr 16 04:24:47.582: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 18.125431736s
Apr 16 04:24:49.819: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 20.361758172s
Apr 16 04:24:52.055: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Pending", Reason="", readiness=false. Elapsed: 22.597661555s
Apr 16 04:24:54.290: INFO: Pod "pvc-volume-tester-writer-5hj6x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.833485752s
STEP: Saw pod success
Apr 16 04:24:54.291: INFO: Pod "pvc-volume-tester-writer-5hj6x" satisfied condition "Succeeded or Failed"
Apr 16 04:24:54.770: INFO: Pod pvc-volume-tester-writer-5hj6x has the following logs:
Apr 16 04:24:54.770: INFO: Deleting pod "pvc-volume-tester-writer-5hj6x" in namespace "provisioning-7124"
Apr 16 04:24:55.009: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-5hj6x" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-50-117.ap-south-1.compute.internal"
Apr 16 04:24:55.955: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-5gjcv" in namespace "provisioning-7124" to be "Succeeded or Failed"
Apr 16 04:24:56.189: INFO: Pod "pvc-volume-tester-reader-5gjcv": Phase="Pending", Reason="", readiness=false. Elapsed: 234.598764ms
Apr 16 04:24:58.425: INFO: Pod "pvc-volume-tester-reader-5gjcv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470583802s
Apr 16 04:25:00.662: INFO: Pod "pvc-volume-tester-reader-5gjcv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707119144s
Apr 16 04:25:02.899: INFO: Pod "pvc-volume-tester-reader-5gjcv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.943955433s
STEP: Saw pod success
Apr 16 04:25:02.899: INFO: Pod "pvc-volume-tester-reader-5gjcv" satisfied condition "Succeeded or Failed"
Apr 16 04:25:03.378: INFO: Pod pvc-volume-tester-reader-5gjcv has the following logs: hello world
Apr 16 04:25:03.378: INFO: Deleting pod "pvc-volume-tester-reader-5gjcv" in namespace "provisioning-7124"
Apr 16 04:25:03.617: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-5gjcv" to be fully deleted
Apr 16 04:25:03.852: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-s5txk] to have phase Bound
Apr 16 04:25:04.089: INFO: PersistentVolumeClaim pvc-s5txk found and phase=Bound (236.827386ms)
... skipping 20 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (default fs)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should provision storage with mount options
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":6,"skipped":12,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:16.470: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 84 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":53,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:16.755: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 16 04:25:14.784: INFO: Waiting up to 5m0s for pod "pod-fd0cfea6-6346-49b1-b09d-f74f9c8486b9" in namespace "emptydir-2941" to be "Succeeded or Failed"
Apr 16 04:25:15.023: INFO: Pod "pod-fd0cfea6-6346-49b1-b09d-f74f9c8486b9": Phase="Pending", Reason="", readiness=false. Elapsed: 239.115119ms
Apr 16 04:25:17.264: INFO: Pod "pod-fd0cfea6-6346-49b1-b09d-f74f9c8486b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.480034106s
STEP: Saw pod success
Apr 16 04:25:17.264: INFO: Pod "pod-fd0cfea6-6346-49b1-b09d-f74f9c8486b9" satisfied condition "Succeeded or Failed"
Apr 16 04:25:17.504: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-fd0cfea6-6346-49b1-b09d-f74f9c8486b9 container test-container: <nil>
STEP: delete the pod
Apr 16 04:25:17.991: INFO: Waiting for pod pod-fd0cfea6-6346-49b1-b09d-f74f9c8486b9 to disappear
Apr 16 04:25:18.231: INFO: Pod pod-fd0cfea6-6346-49b1-b09d-f74f9c8486b9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
files with FSGroup ownership should support (root,0644,tmpfs)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":7,"skipped":29,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:18.745: INFO: Only supported for providers [vsphere] (not aws)
... skipping 273 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
[Testpattern: Dynamic PV (block volmode)] volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should store data
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":43,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:18.974: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 50 lines ...
• [SLOW TEST:11.929 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":13,"skipped":67,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 16 04:25:19.444: INFO: Only supported for providers [openstack] (not aws)
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 04:25:19.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-279" for this suite.
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":7,"skipped":17,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 19 lines ... Apr 16 04:25:03.386: INFO: PersistentVolumeClaim pvc-nnk5f found but phase is Pending instead of Bound. Apr 16 04:25:05.621: INFO: PersistentVolumeClaim pvc-nnk5f found and phase=Bound (11.414831689s) Apr 16 04:25:05.621: INFO: Waiting up to 3m0s for PersistentVolume local-6ch26 to have phase Bound Apr 16 04:25:05.859: INFO: PersistentVolume local-6ch26 found and phase=Bound (237.054463ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-vkxb [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:25:06.567: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vkxb" in namespace "provisioning-7255" to be "Succeeded or Failed" Apr 16 04:25:06.802: INFO: Pod "pod-subpath-test-preprovisionedpv-vkxb": Phase="Pending", Reason="", readiness=false. Elapsed: 235.195908ms Apr 16 04:25:09.045: INFO: Pod "pod-subpath-test-preprovisionedpv-vkxb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47760862s Apr 16 04:25:11.281: INFO: Pod "pod-subpath-test-preprovisionedpv-vkxb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.714130306s Apr 16 04:25:13.518: INFO: Pod "pod-subpath-test-preprovisionedpv-vkxb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.950871181s [1mSTEP[0m: Saw pod success Apr 16 04:25:13.518: INFO: Pod "pod-subpath-test-preprovisionedpv-vkxb" satisfied condition "Succeeded or Failed" Apr 16 04:25:13.753: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-vkxb container test-container-subpath-preprovisionedpv-vkxb: <nil> [1mSTEP[0m: delete the pod Apr 16 04:25:14.233: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vkxb to disappear Apr 16 04:25:14.468: INFO: Pod pod-subpath-test-preprovisionedpv-vkxb no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-vkxb Apr 16 04:25:14.468: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vkxb" in namespace "provisioning-7255" [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-vkxb [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:25:14.945: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vkxb" in namespace "provisioning-7255" to be "Succeeded or Failed" Apr 16 04:25:15.180: INFO: Pod "pod-subpath-test-preprovisionedpv-vkxb": Phase="Pending", Reason="", readiness=false. Elapsed: 234.883107ms Apr 16 04:25:17.418: INFO: Pod "pod-subpath-test-preprovisionedpv-vkxb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.472885035s [1mSTEP[0m: Saw pod success Apr 16 04:25:17.418: INFO: Pod "pod-subpath-test-preprovisionedpv-vkxb" satisfied condition "Succeeded or Failed" Apr 16 04:25:17.653: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-vkxb container test-container-subpath-preprovisionedpv-vkxb: <nil> [1mSTEP[0m: delete the pod Apr 16 04:25:18.130: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vkxb to disappear Apr 16 04:25:18.367: INFO: Pod pod-subpath-test-preprovisionedpv-vkxb no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-vkxb Apr 16 04:25:18.367: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vkxb" in namespace "provisioning-7255" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":13,"skipped":79,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:21.545: INFO: Driver hostPath doesn't support DynamicPV -- skipping ... skipping 31 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:25:21.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "node-lease-test-8333" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":8,"skipped":18,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:21.820: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 109 lines ... 
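[Editor's note] The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" / "Phase=... Elapsed: ..." lines above are produced by the e2e framework's pod-wait helper. Below is a minimal client-go sketch of that polling shape; the helper name, interval, and print format are illustrative assumptions, not the framework's actual code.

package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminal polls a pod until it reaches Succeeded or Failed,
// printing Phase/Elapsed lines similar to those in the log above.
// (Hypothetical helper; the real framework helper differs in detail.)
func waitForPodTerminal(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil // corresponds to "Saw pod success"
		case v1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			return false, nil // still Pending/Running; keep polling
		}
	})
}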
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume at the same time [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":79,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode ... skipping 127 lines ... [32m• [SLOW TEST:32.938 seconds][0m [sig-api-machinery] Servers with support for API chunking [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should return chunks of results for list calls [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":10,"skipped":39,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:25:18.983: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename downward-api [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward api env vars Apr 16 04:25:20.439: INFO: Waiting up to 5m0s for pod "downward-api-50f2ccf6-c3d1-4dd5-a5d5-c21b09fa3690" in namespace "downward-api-2331" to be "Succeeded or Failed" Apr 16 04:25:20.678: INFO: Pod "downward-api-50f2ccf6-c3d1-4dd5-a5d5-c21b09fa3690": Phase="Pending", Reason="", readiness=false. Elapsed: 238.122571ms Apr 16 04:25:22.917: INFO: Pod "downward-api-50f2ccf6-c3d1-4dd5-a5d5-c21b09fa3690": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.477658261s [1mSTEP[0m: Saw pod success Apr 16 04:25:22.917: INFO: Pod "downward-api-50f2ccf6-c3d1-4dd5-a5d5-c21b09fa3690" satisfied condition "Succeeded or Failed" Apr 16 04:25:23.155: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod downward-api-50f2ccf6-c3d1-4dd5-a5d5-c21b09fa3690 container dapi-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:25:23.637: INFO: Waiting for pod downward-api-50f2ccf6-c3d1-4dd5-a5d5-c21b09fa3690 to disappear Apr 16 04:25:23.879: INFO: Pod downward-api-50f2ccf6-c3d1-4dd5-a5d5-c21b09fa3690 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.381 seconds][0m [sig-node] Downward API [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should provide host IP as an env var [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":44,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:24.417: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 28 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: hostPathSymlink] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver hostPathSymlink doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 77 lines ... 
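[Editor's note] The Downward API test above ("should provide host IP as an env var") works by injecting the node's status.hostIP into a container env var via a fieldRef. A minimal sketch of such a pod spec follows; the pod name, image, and command are illustrative assumptions, while the fieldRef mechanism itself is standard Kubernetes API.

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostIPPod builds a pod whose HOST_IP env var is filled in by the kubelet
// from the downward API fieldRef status.hostIP.
func hostIPPod(ns string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo", Namespace: ns},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29", // any image with a shell works
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []v1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &v1.EnvVarSource{
						FieldRef: &v1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}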
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23[0m Granular Checks: Pods [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30[0m should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":42,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:24.657: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 177 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:25:26.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "tables-4296" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":12,"skipped":113,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 27 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m when create a pod with lifecycle hook [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43[0m should execute prestop exec hook properly [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":57,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:27.308: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 69 lines ... 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Apr 16 04:25:24.963: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c820a7ea-e88a-4209-8104-cf941627c130" in namespace "projected-6438" to be "Succeeded or Failed" Apr 16 04:25:25.199: INFO: Pod "downwardapi-volume-c820a7ea-e88a-4209-8104-cf941627c130": Phase="Pending", Reason="", readiness=false. Elapsed: 236.359292ms Apr 16 04:25:27.435: INFO: Pod "downwardapi-volume-c820a7ea-e88a-4209-8104-cf941627c130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.471767003s [1mSTEP[0m: Saw pod success Apr 16 04:25:27.435: INFO: Pod "downwardapi-volume-c820a7ea-e88a-4209-8104-cf941627c130" satisfied condition "Succeeded or Failed" Apr 16 04:25:27.669: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod downwardapi-volume-c820a7ea-e88a-4209-8104-cf941627c130 container client-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:25:28.145: INFO: Waiting for pod downwardapi-volume-c820a7ea-e88a-4209-8104-cf941627c130 to disappear Apr 16 04:25:28.379: INFO: Pod downwardapi-volume-c820a7ea-e88a-4209-8104-cf941627c130 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.309 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":31,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-instrumentation] MetricsGrabber /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 12 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:25:28.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "metrics-grabber-567" for this suite. 
[32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":8,"skipped":62,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:29.109: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 70 lines ... [1mSTEP[0m: Building a namespace api object, basename secrets [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating secret with name secret-test-35cabb66-2596-4597-8d99-25ab8818a817 [1mSTEP[0m: Creating a pod to test consume secrets Apr 16 04:25:26.012: INFO: Waiting up to 5m0s for pod "pod-secrets-03dd6d79-b2bd-47c7-93a8-785cae308943" in namespace "secrets-7001" to be "Succeeded or Failed" Apr 16 04:25:26.251: INFO: Pod "pod-secrets-03dd6d79-b2bd-47c7-93a8-785cae308943": Phase="Pending", Reason="", readiness=false. Elapsed: 239.305651ms Apr 16 04:25:28.490: INFO: Pod "pod-secrets-03dd6d79-b2bd-47c7-93a8-785cae308943": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.478643193s [1mSTEP[0m: Saw pod success Apr 16 04:25:28.490: INFO: Pod "pod-secrets-03dd6d79-b2bd-47c7-93a8-785cae308943" satisfied condition "Succeeded or Failed" Apr 16 04:25:28.733: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-secrets-03dd6d79-b2bd-47c7-93a8-785cae308943 container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Apr 16 04:25:29.215: INFO: Waiting for pod pod-secrets-03dd6d79-b2bd-47c7-93a8-785cae308943 to disappear Apr 16 04:25:29.455: INFO: Pod pod-secrets-03dd6d79-b2bd-47c7-93a8-785cae308943 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 41 lines ... [32m• [SLOW TEST:13.077 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m listing mutating webhooks should work [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":14,"skipped":89,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 76 lines ... 
[32m• [SLOW TEST:31.268 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:32.980: INFO: Driver local doesn't support ext3 -- skipping ... skipping 35 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:25:35.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-5067" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 21 lines ... Apr 16 04:25:01.751: INFO: PersistentVolumeClaim pvc-dp6pn found but phase is Pending instead of Bound. Apr 16 04:25:03.987: INFO: PersistentVolumeClaim pvc-dp6pn found and phase=Bound (9.173982177s) Apr 16 04:25:03.987: INFO: Waiting up to 3m0s for PersistentVolume local-cm4xp to have phase Bound Apr 16 04:25:04.221: INFO: PersistentVolume local-cm4xp found and phase=Bound (234.00589ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-nb8w [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Apr 16 04:25:04.927: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nb8w" in namespace "provisioning-5800" to be "Succeeded or Failed" Apr 16 04:25:05.161: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Pending", Reason="", readiness=false. Elapsed: 233.946259ms Apr 16 04:25:07.396: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.468952819s Apr 16 04:25:09.631: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Running", Reason="", readiness=true. Elapsed: 4.704402976s Apr 16 04:25:11.867: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Running", Reason="", readiness=true. Elapsed: 6.940076757s Apr 16 04:25:14.102: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Running", Reason="", readiness=true. Elapsed: 9.175332702s Apr 16 04:25:16.337: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Running", Reason="", readiness=true. Elapsed: 11.410121273s Apr 16 04:25:18.573: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Running", Reason="", readiness=true. 
Elapsed: 13.64615116s Apr 16 04:25:20.808: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Running", Reason="", readiness=true. Elapsed: 15.880662398s Apr 16 04:25:23.042: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Running", Reason="", readiness=true. Elapsed: 18.115221151s Apr 16 04:25:25.277: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Running", Reason="", readiness=true. Elapsed: 20.350066483s Apr 16 04:25:27.512: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Running", Reason="", readiness=true. Elapsed: 22.584624959s Apr 16 04:25:29.748: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.821024297s [1mSTEP[0m: Saw pod success Apr 16 04:25:29.748: INFO: Pod "pod-subpath-test-preprovisionedpv-nb8w" satisfied condition "Succeeded or Failed" Apr 16 04:25:29.984: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-nb8w container test-container-subpath-preprovisionedpv-nb8w: <nil> [1mSTEP[0m: delete the pod Apr 16 04:25:30.463: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nb8w to disappear Apr 16 04:25:30.701: INFO: Pod pod-subpath-test-preprovisionedpv-nb8w no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-nb8w Apr 16 04:25:30.701: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nb8w" in namespace "provisioning-5800" ... skipping 24 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":13,"skipped":130,"failed":0} [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":86,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:25:09.559: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename persistent-local-volumes-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 70 lines ... 
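[Editor's note] The "PersistentVolumeClaim ... found but phase is Pending instead of Bound" lines in the pre-provisioned PV tests above come from polling the claim's status until it binds. A minimal client-go sketch of that loop, under the assumption of a one-second poll interval (the function name and message format are illustrative):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls a PersistentVolumeClaim until Status.Phase is Bound,
// echoing the "found but phase is Pending instead of Bound" pattern above.
func waitForPVCBound(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != v1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}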
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume at the same time [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":86,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:38.835: INFO: Only supported for providers [azure] (not aws) ... skipping 100 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume at the same time [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":44,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:40.746: INFO: Driver "csi-hostpath" does not support FsGroup - skipping ... skipping 28 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: tmpfs] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 36 lines ... 
[32m• [SLOW TEST:11.936 seconds][0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":9,"skipped":79,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:25:41.167: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename containers [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test override all Apr 16 04:25:42.604: INFO: Waiting up to 5m0s for pod "client-containers-dfb2a078-6465-4543-a899-4ac389fef2de" in namespace "containers-3813" to be "Succeeded or Failed" Apr 16 04:25:42.843: INFO: Pod "client-containers-dfb2a078-6465-4543-a899-4ac389fef2de": Phase="Pending", Reason="", readiness=false. Elapsed: 238.630013ms Apr 16 04:25:45.082: INFO: Pod "client-containers-dfb2a078-6465-4543-a899-4ac389fef2de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.477814883s [1mSTEP[0m: Saw pod success Apr 16 04:25:45.082: INFO: Pod "client-containers-dfb2a078-6465-4543-a899-4ac389fef2de" satisfied condition "Succeeded or Failed" Apr 16 04:25:45.322: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod client-containers-dfb2a078-6465-4543-a899-4ac389fef2de container agnhost-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:25:45.811: INFO: Waiting for pod client-containers-dfb2a078-6465-4543-a899-4ac389fef2de to disappear Apr 16 04:25:46.049: INFO: Pod client-containers-dfb2a078-6465-4543-a899-4ac389fef2de no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
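[Editor's note] The "Creating a pod to test override all" step above exercises the rule that setting a container's Command replaces the image's ENTRYPOINT and setting Args replaces its CMD. A minimal sketch of such a spec; the pod name, image tag, and argument values are illustrative assumptions, not the test's exact fixture.

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overridePod sets both Command and Args, so neither the image's
// ENTRYPOINT nor its CMD is used ("override all").
func overridePod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "agnhost-container",
				Image:   "registry.k8s.io/e2e-test-images/agnhost:2.32",
				Command: []string{"/agnhost"},                        // replaces ENTRYPOINT
				Args:    []string{"entrypoint-tester", "override", "arguments"}, // replaces CMD
			}},
		},
	}
}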
[32m• [SLOW TEST:5.362 seconds][0m [sig-node] Docker Containers [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be able to override the image's default command and arguments [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":87,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes ... skipping 23 lines ... Apr 16 04:25:33.267: INFO: PersistentVolumeClaim pvc-fqcd4 found but phase is Pending instead of Bound. Apr 16 04:25:35.506: INFO: PersistentVolumeClaim pvc-fqcd4 found and phase=Bound (13.672580894s) Apr 16 04:25:35.506: INFO: Waiting up to 3m0s for PersistentVolume local-g8tfw to have phase Bound Apr 16 04:25:35.743: INFO: PersistentVolume local-g8tfw found and phase=Bound (237.281608ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-j8sx [1mSTEP[0m: Creating a pod to test exec-volume-test Apr 16 04:25:36.458: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-j8sx" in namespace "volume-7270" to be "Succeeded or Failed" Apr 16 04:25:36.696: INFO: Pod "exec-volume-test-preprovisionedpv-j8sx": Phase="Pending", Reason="", readiness=false. Elapsed: 237.903833ms Apr 16 04:25:38.935: INFO: Pod "exec-volume-test-preprovisionedpv-j8sx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.476142918s [1mSTEP[0m: Saw pod success Apr 16 04:25:38.935: INFO: Pod "exec-volume-test-preprovisionedpv-j8sx" satisfied condition "Succeeded or Failed" Apr 16 04:25:39.172: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod exec-volume-test-preprovisionedpv-j8sx container exec-container-preprovisionedpv-j8sx: <nil> [1mSTEP[0m: delete the pod Apr 16 04:25:40.314: INFO: Waiting for pod exec-volume-test-preprovisionedpv-j8sx to disappear Apr 16 04:25:40.552: INFO: Pod exec-volume-test-preprovisionedpv-j8sx no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-j8sx Apr 16 04:25:40.552: INFO: Deleting pod "exec-volume-test-preprovisionedpv-j8sx" in namespace "volume-7270" ... skipping 22 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":14,"skipped":143,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:46.613: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 138 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should store data [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":12,"skipped":132,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 18 lines ... [32m• [SLOW TEST:14.319 seconds][0m [sig-api-machinery] ResourceQuota [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should create a ResourceQuota and capture the life of a persistent volume claim [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:480[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":9,"skipped":110,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:53.235: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 199 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should verify that all csinodes have volume limits [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumelimits.go:238[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":5,"skipped":9,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:25:53.338: INFO: Only supported for providers [gce gke] (not aws) ... skipping 58 lines ... [36mOnly supported for providers [azure] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1576 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":10,"skipped":67,"failed":0} [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:24:26.456: INFO: >>> kubeConfig: /root/.kube/config ... skipping 121 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:166[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":11,"skipped":67,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:01.821: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping ... skipping 37 lines ... 
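[Editor's note] The "should create read/write inline ephemeral volume" test above uses a generic ephemeral volume: a PVC template embedded directly in the pod spec, provisioned and deleted with the pod. A minimal sketch against the v1.22-era API this job runs (pod name, image, mount path, and size are illustrative assumptions):

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ephemeralPod declares an inline PVC via Ephemeral.VolumeClaimTemplate;
// the claim shares the pod's lifecycle.
func ephemeralPod(storageClass string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "inline-volume-tester"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:         "csi-volume-tester",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "echo hello > /mnt/test/data && sleep 3600"},
				VolumeMounts: []v1.VolumeMount{{Name: "my-volume", MountPath: "/mnt/test"}},
			}},
			Volumes: []v1.Volume{{
				Name: "my-volume",
				VolumeSource: v1.VolumeSource{
					Ephemeral: &v1.EphemeralVolumeSource{
						VolumeClaimTemplate: &v1.PersistentVolumeClaimTemplate{
							Spec: v1.PersistentVolumeClaimSpec{
								AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
								StorageClassName: &storageClass,
								Resources: v1.ResourceRequirements{
									Requests: v1.ResourceList{
										v1.ResourceStorage: resource.MustParse("1Gi"),
									},
								},
							},
						},
					},
				},
			}},
		},
	}
}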
Apr 16 04:26:03.520: INFO: pv is nil [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [1.678 seconds][0m [sig-storage] PersistentVolumes GCEPD [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [36m[1mshould test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127[0m [36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 [90m------------------------------[0m ... skipping 226 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192[0m Two pods mounting a local volume one after the other [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254[0m should be able to write from pod1 and read from pod2 [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255[0m [90m------------------------------[0m {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":10,"skipped":54,"failed":0} [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:26:03.898: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename nettest [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 10 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:26:07.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "nettest-3729" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":11,"skipped":54,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode ... skipping 65 lines ... 
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":85,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:08.902: INFO: Driver "csi-hostpath" does not support FsGroup - skipping ... skipping 21 lines ... Apr 16 04:26:03.554: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 [1mSTEP[0m: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 16 04:26:04.989: INFO: Waiting up to 5m0s for pod "security-context-248d81cf-f907-4d5c-ba6b-19bf234a1465" in namespace "security-context-5554" to be "Succeeded or Failed" Apr 16 04:26:05.227: INFO: Pod "security-context-248d81cf-f907-4d5c-ba6b-19bf234a1465": Phase="Pending", Reason="", readiness=false. Elapsed: 238.138096ms Apr 16 04:26:07.467: INFO: Pod "security-context-248d81cf-f907-4d5c-ba6b-19bf234a1465": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.478297363s [1mSTEP[0m: Saw pod success Apr 16 04:26:07.468: INFO: Pod "security-context-248d81cf-f907-4d5c-ba6b-19bf234a1465" satisfied condition "Succeeded or Failed" Apr 16 04:26:07.706: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod security-context-248d81cf-f907-4d5c-ba6b-19bf234a1465 container test-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:26:08.187: INFO: Waiting for pod security-context-248d81cf-f907-4d5c-ba6b-19bf234a1465 to disappear Apr 16 04:26:08.425: INFO: Pod security-context-248d81cf-f907-4d5c-ba6b-19bf234a1465 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 6 lines ... 
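[Editor's note] The Security Context test above requests an unconfined seccomp profile; the log's "seccomp.security.alpha.kubernetes.io/pod" annotation is the legacy spelling, which in the 1.22-era API maps to the pod's SecurityContext.SeccompProfile field. A minimal sketch (pod name, image, and verification command are illustrative assumptions):

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unconfinedPod disables seccomp filtering for the whole pod.
func unconfinedPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{
				SeccompProfile: &v1.SeccompProfile{Type: v1.SeccompProfileTypeUnconfined},
			},
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"grep", "Seccomp:", "/proc/self/status"}, // expect "Seccomp: 0"
			}},
		},
	}
}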
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m should support seccomp unconfined on the pod [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":12,"skipped":75,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:08.925: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 95 lines ... [sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: local][LocalVolumeType: dir-link-bindmounted] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (delayed binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 29 lines ... 
Apr 16 04:25:33.383: INFO: Unable to read jessie_udp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:33.622: INFO: Unable to read jessie_tcp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:33.863: INFO: Unable to read jessie_udp@dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:34.104: INFO: Unable to read jessie_tcp@dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:34.342: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:34.581: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:36.020: INFO: Lookups using dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6992 wheezy_tcp@dns-test-service.dns-6992 wheezy_udp@dns-test-service.dns-6992.svc wheezy_tcp@dns-test-service.dns-6992.svc wheezy_udp@_http._tcp.dns-test-service.dns-6992.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6992.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6992 jessie_tcp@dns-test-service.dns-6992 jessie_udp@dns-test-service.dns-6992.svc jessie_tcp@dns-test-service.dns-6992.svc jessie_udp@_http._tcp.dns-test-service.dns-6992.svc jessie_tcp@_http._tcp.dns-test-service.dns-6992.svc] Apr 16 04:25:41.259: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:41.496: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:41.734: INFO: Unable to read wheezy_udp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:41.972: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:42.210: INFO: Unable to read wheezy_udp@dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) ... skipping 5 lines ... 
Apr 16 04:25:45.127: INFO: Unable to read jessie_udp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:45.365: INFO: Unable to read jessie_tcp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:45.603: INFO: Unable to read jessie_udp@dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:45.841: INFO: Unable to read jessie_tcp@dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:46.079: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:46.317: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:47.752: INFO: Lookups using dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6992 wheezy_tcp@dns-test-service.dns-6992 wheezy_udp@dns-test-service.dns-6992.svc wheezy_tcp@dns-test-service.dns-6992.svc wheezy_udp@_http._tcp.dns-test-service.dns-6992.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6992.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6992 jessie_tcp@dns-test-service.dns-6992 jessie_udp@dns-test-service.dns-6992.svc jessie_tcp@dns-test-service.dns-6992.svc jessie_udp@_http._tcp.dns-test-service.dns-6992.svc jessie_tcp@_http._tcp.dns-test-service.dns-6992.svc] Apr 16 04:25:51.259: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:51.496: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:51.734: INFO: Unable to read wheezy_udp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:51.972: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:52.210: INFO: Unable to read wheezy_udp@dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) ... skipping 5 lines ... 
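[Editor's note] The DNS test above keeps re-running the same set of wheezy/jessie lookups and logs "Lookups using ... failed for: [...]" until every record resolves, which is why the failures repeat and then clear once kube-dns records propagate. The real test execs queries inside probe pods; the sketch below uses a plain net.Resolver only to illustrate the retry shape (function name and interval are assumptions).

package e2esketch

import (
	"context"
	"fmt"
	"net"
	"time"
)

// retryLookups re-resolves every name until all succeed, printing the
// still-failing subset each round, as in the log above.
func retryLookups(ctx context.Context, names []string, interval time.Duration) {
	r := &net.Resolver{}
	for {
		var failed []string
		for _, n := range names {
			if _, err := r.LookupHost(ctx, n); err != nil {
				failed = append(failed, n)
			}
		}
		if len(failed) == 0 {
			fmt.Println("DNS probes succeeded")
			return
		}
		fmt.Printf("Lookups failed for: %v\n", failed)
		time.Sleep(interval)
	}
}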
Apr 16 04:25:55.072: INFO: Unable to read jessie_udp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:55.311: INFO: Unable to read jessie_tcp@dns-test-service.dns-6992 from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:55.552: INFO: Unable to read jessie_udp@dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:55.790: INFO: Unable to read jessie_tcp@dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:56.027: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:56.265: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6992.svc from pod dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124: the server could not find the requested resource (get pods dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124) Apr 16 04:25:57.694: INFO: Lookups using dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6992 wheezy_tcp@dns-test-service.dns-6992 wheezy_udp@dns-test-service.dns-6992.svc wheezy_tcp@dns-test-service.dns-6992.svc wheezy_udp@_http._tcp.dns-test-service.dns-6992.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6992.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6992 jessie_tcp@dns-test-service.dns-6992 jessie_udp@dns-test-service.dns-6992.svc jessie_tcp@dns-test-service.dns-6992.svc jessie_udp@_http._tcp.dns-test-service.dns-6992.svc jessie_tcp@_http._tcp.dns-test-service.dns-6992.svc] Apr 16 04:26:07.743: INFO: DNS probes using dns-6992/dns-test-ce32d24b-e8e2-4adf-8081-b3bf6e752124 succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: deleting the test service [1mSTEP[0m: deleting the test headless service ... skipping 8 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":44,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:09.006: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping ... 
skipping 157 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Apr 16 04:26:05.534: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cafd9df-7a10-4f40-af81-bb9fa83eac23" in namespace "downward-api-4454" to be "Succeeded or Failed" Apr 16 04:26:05.773: INFO: Pod "downwardapi-volume-7cafd9df-7a10-4f40-af81-bb9fa83eac23": Phase="Pending", Reason="", readiness=false. Elapsed: 238.760459ms Apr 16 04:26:08.013: INFO: Pod "downwardapi-volume-7cafd9df-7a10-4f40-af81-bb9fa83eac23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.47917501s [1mSTEP[0m: Saw pod success Apr 16 04:26:08.014: INFO: Pod "downwardapi-volume-7cafd9df-7a10-4f40-af81-bb9fa83eac23" satisfied condition "Succeeded or Failed" Apr 16 04:26:08.252: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod downwardapi-volume-7cafd9df-7a10-4f40-af81-bb9fa83eac23 container client-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:26:08.742: INFO: Waiting for pod downwardapi-volume-7cafd9df-7a10-4f40-af81-bb9fa83eac23 to disappear Apr 16 04:26:08.983: INFO: Pod downwardapi-volume-7cafd9df-7a10-4f40-af81-bb9fa83eac23 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:5.365 seconds][0m [sig-storage] Downward API volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":92,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:09.481: INFO: Driver hostPath doesn't support ext3 -- skipping ... skipping 32 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:26:11.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "configmap-8750" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":16,"skipped":96,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 19 lines ... 
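(Editor's note: the recurring "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" / Phase="Pending" lines in this log are the e2e framework's pod-phase polling. A minimal sketch of the same loop with client-go follows; the namespace and pod name are taken from the Downward API test above, and the 2s poll interval is an assumption, not the framework's exact value.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodTerminal polls the pod's phase until it reaches a terminal state
// (Succeeded or Failed) or the 5m timeout expires, printing the elapsed time
// on each attempt, much like the log lines above.
func waitForPodTerminal(cs kubernetes.Interface, ns, name string) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	start := time.Now()
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, phase, time.Since(start))
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	phase, err := waitForPodTerminal(cs, "downward-api-4454", "downwardapi-volume-7cafd9df-7a10-4f40-af81-bb9fa83eac23")
	fmt.Println(phase, err)
}
```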
[32m• [SLOW TEST:25.250 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should be restarted with a local redirect http liveness probe [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:280[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":15,"skipped":164,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client ... skipping 32 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23[0m Listing PodDisruptionBudgets for all namespaces [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75[0m should list and delete a collection of PodDisruptionBudgets [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":12,"skipped":55,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 16 lines ... Apr 16 04:26:01.982: INFO: PersistentVolumeClaim pvc-kjfw9 found but phase is Pending instead of Bound. Apr 16 04:26:04.220: INFO: PersistentVolumeClaim pvc-kjfw9 found and phase=Bound (4.717116957s) Apr 16 04:26:04.220: INFO: Waiting up to 3m0s for PersistentVolume local-gq7kw to have phase Bound Apr 16 04:26:04.458: INFO: PersistentVolume local-gq7kw found and phase=Bound (237.144388ms) [1mSTEP[0m: Creating pod pod-subpath-test-preprovisionedpv-nbbj [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:26:05.173: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nbbj" in namespace "provisioning-4203" to be "Succeeded or Failed" Apr 16 04:26:05.410: INFO: Pod "pod-subpath-test-preprovisionedpv-nbbj": Phase="Pending", Reason="", readiness=false. Elapsed: 237.165831ms Apr 16 04:26:07.648: INFO: Pod "pod-subpath-test-preprovisionedpv-nbbj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474918202s Apr 16 04:26:09.886: INFO: Pod "pod-subpath-test-preprovisionedpv-nbbj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.713152324s [1mSTEP[0m: Saw pod success Apr 16 04:26:09.886: INFO: Pod "pod-subpath-test-preprovisionedpv-nbbj" satisfied condition "Succeeded or Failed" Apr 16 04:26:10.124: INFO: Trying to get logs from node ip-172-20-56-43.ap-south-1.compute.internal pod pod-subpath-test-preprovisionedpv-nbbj container test-container-subpath-preprovisionedpv-nbbj: <nil> [1mSTEP[0m: delete the pod Apr 16 04:26:10.613: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nbbj to disappear Apr 16 04:26:10.851: INFO: Pod pod-subpath-test-preprovisionedpv-nbbj no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-preprovisionedpv-nbbj Apr 16 04:26:10.851: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nbbj" in namespace "provisioning-4203" ... skipping 19 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":16,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:14.069: INFO: Driver local doesn't support ext4 -- skipping ... skipping 80 lines ... Apr 16 04:25:34.580: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath88kxh] to have phase Bound Apr 16 04:25:34.815: INFO: PersistentVolumeClaim csi-hostpath88kxh found but phase is Pending instead of Bound. Apr 16 04:25:37.051: INFO: PersistentVolumeClaim csi-hostpath88kxh found but phase is Pending instead of Bound. Apr 16 04:25:39.287: INFO: PersistentVolumeClaim csi-hostpath88kxh found and phase=Bound (4.706459215s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-ztvn [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:25:39.995: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ztvn" in namespace "provisioning-7785" to be "Succeeded or Failed" Apr 16 04:25:40.230: INFO: Pod "pod-subpath-test-dynamicpv-ztvn": Phase="Pending", Reason="", readiness=false. Elapsed: 234.675741ms Apr 16 04:25:42.465: INFO: Pod "pod-subpath-test-dynamicpv-ztvn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.469833573s Apr 16 04:25:44.701: INFO: Pod "pod-subpath-test-dynamicpv-ztvn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.705561068s Apr 16 04:25:46.937: INFO: Pod "pod-subpath-test-dynamicpv-ztvn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.941520813s Apr 16 04:25:49.175: INFO: Pod "pod-subpath-test-dynamicpv-ztvn": Phase="Pending", Reason="", readiness=false. Elapsed: 9.179728379s Apr 16 04:25:51.411: INFO: Pod "pod-subpath-test-dynamicpv-ztvn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.41515349s [1mSTEP[0m: Saw pod success Apr 16 04:25:51.411: INFO: Pod "pod-subpath-test-dynamicpv-ztvn" satisfied condition "Succeeded or Failed" Apr 16 04:25:51.647: INFO: Trying to get logs from node ip-172-20-40-167.ap-south-1.compute.internal pod pod-subpath-test-dynamicpv-ztvn container test-container-subpath-dynamicpv-ztvn: <nil> [1mSTEP[0m: delete the pod Apr 16 04:25:52.135: INFO: Waiting for pod pod-subpath-test-dynamicpv-ztvn to disappear Apr 16 04:25:52.370: INFO: Pod pod-subpath-test-dynamicpv-ztvn no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-ztvn Apr 16 04:25:52.370: INFO: Deleting pod "pod-subpath-test-dynamicpv-ztvn" in namespace "provisioning-7785" ... skipping 60 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":14,"skipped":86,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:16.246: INFO: Driver local doesn't support InlineVolume -- skipping ... skipping 23 lines ... Apr 16 04:26:09.015: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 [1mSTEP[0m: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 16 04:26:10.434: INFO: Waiting up to 5m0s for pod "security-context-639e1c94-8171-44f5-9008-f4ce818da9f7" in namespace "security-context-6665" to be "Succeeded or Failed" Apr 16 04:26:10.669: INFO: Pod "security-context-639e1c94-8171-44f5-9008-f4ce818da9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 235.438593ms Apr 16 04:26:12.906: INFO: Pod "security-context-639e1c94-8171-44f5-9008-f4ce818da9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471918287s Apr 16 04:26:15.142: INFO: Pod "security-context-639e1c94-8171-44f5-9008-f4ce818da9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.708831405s Apr 16 04:26:17.381: INFO: Pod "security-context-639e1c94-8171-44f5-9008-f4ce818da9f7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.947276327s [1mSTEP[0m: Saw pod success Apr 16 04:26:17.381: INFO: Pod "security-context-639e1c94-8171-44f5-9008-f4ce818da9f7" satisfied condition "Succeeded or Failed" Apr 16 04:26:17.617: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod security-context-639e1c94-8171-44f5-9008-f4ce818da9f7 container test-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:26:18.096: INFO: Waiting for pod security-context-639e1c94-8171-44f5-9008-f4ce818da9f7 to disappear Apr 16 04:26:18.332: INFO: Pod security-context-639e1c94-8171-44f5-9008-f4ce818da9f7 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:9.790 seconds][0m [sig-node] Security Context [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23[0m should support seccomp runtime/default [LinuxOnly] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":10,"skipped":104,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:18.824: INFO: Only supported for providers [gce gke] (not aws) ... skipping 78 lines ... [32m• [SLOW TEST:62.161 seconds][0m [sig-node] Probing container [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":71,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:21.176: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) ... skipping 199 lines ... 
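(Editor's note: the seccomp test above runs a pod under the runtime/default profile; the seccomp.security.alpha.kubernetes.io/pod annotation it mentions is the legacy spelling, which maps to the securityContext.seccompProfile field on current API versions. A hedged sketch of the equivalent pod; the image and command are illustrative, not the test's exact values.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// seccompPod builds a pod that runs under the container runtime's default
// seccomp profile via the securityContext field.
func seccompPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-seccomp"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{
					Type: corev1.SeccompProfileTypeRuntimeDefault,
				},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative image/tag
				Command: []string{"/agnhost", "pause"},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", seccompPod().Spec.SecurityContext) }
```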
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m CSI Volume expansion [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562[0m should expand volume by restarting pod if attach=on, nodeExpansion=on [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":10,"skipped":55,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:26:09.040: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename configmap [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 [1mSTEP[0m: Creating configMap with name configmap-test-volume-bb611f46-1719-418e-a97c-cfe518381ba8 [1mSTEP[0m: Creating a pod to test consume configMaps Apr 16 04:26:10.727: INFO: Waiting up to 5m0s for pod "pod-configmaps-80ba154c-e770-49e8-b449-9947337da263" in namespace "configmap-1889" to be "Succeeded or Failed" Apr 16 04:26:10.966: INFO: Pod "pod-configmaps-80ba154c-e770-49e8-b449-9947337da263": Phase="Pending", Reason="", readiness=false. Elapsed: 238.565416ms Apr 16 04:26:13.204: INFO: Pod "pod-configmaps-80ba154c-e770-49e8-b449-9947337da263": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477475752s Apr 16 04:26:15.445: INFO: Pod "pod-configmaps-80ba154c-e770-49e8-b449-9947337da263": Phase="Pending", Reason="", readiness=false. Elapsed: 4.71757893s Apr 16 04:26:17.683: INFO: Pod "pod-configmaps-80ba154c-e770-49e8-b449-9947337da263": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956353577s Apr 16 04:26:19.928: INFO: Pod "pod-configmaps-80ba154c-e770-49e8-b449-9947337da263": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200926085s Apr 16 04:26:22.168: INFO: Pod "pod-configmaps-80ba154c-e770-49e8-b449-9947337da263": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.441096974s [1mSTEP[0m: Saw pod success Apr 16 04:26:22.168: INFO: Pod "pod-configmaps-80ba154c-e770-49e8-b449-9947337da263" satisfied condition "Succeeded or Failed" Apr 16 04:26:22.407: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-configmaps-80ba154c-e770-49e8-b449-9947337da263 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:26:22.892: INFO: Waiting for pod pod-configmaps-80ba154c-e770-49e8-b449-9947337da263 to disappear Apr 16 04:26:23.131: INFO: Pod pod-configmaps-80ba154c-e770-49e8-b449-9947337da263 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... 
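(Editor's note: the ConfigMap volume test above mounts the volume with an explicit defaultMode while the pod runs as a non-root user with an fsGroup. A sketch of that pod shape; the UID/GID values and file mode are illustrative, not necessarily the test's exact numbers.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64p(v int64) *int64 { return &v }
func int32p(v int32) *int32 { return &v }

// configMapPod mounts a ConfigMap volume with an explicit defaultMode while
// running as a non-root user; fsGroup controls the group ownership the
// kubelet applies to the projected files.
func configMapPod(cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64p(1000), // non-root
				FSGroup:   int64p(1001),
			},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						DefaultMode:          int32p(0440), // mode asserted on the projected file
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
}

func main() { fmt.Println(configMapPod("configmap-test-volume").Name) }
```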
[32m• [SLOW TEST:14.582 seconds][0m [sig-storage] ConfigMap [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":13,"skipped":95,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:23.649: INFO: Only supported for providers [gce gke] (not aws) ... skipping 31 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:26:23.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "endpointslice-24" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":11,"skipped":62,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:26:11.937: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename security-context-test [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 16 04:26:13.374: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a1099b85-f45d-44a5-b1db-092e54e30c3a" in namespace "security-context-test-20" to be "Succeeded or Failed" Apr 16 04:26:13.613: INFO: Pod "alpine-nnp-false-a1099b85-f45d-44a5-b1db-092e54e30c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 238.902043ms Apr 16 04:26:15.853: INFO: Pod "alpine-nnp-false-a1099b85-f45d-44a5-b1db-092e54e30c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479204036s Apr 16 04:26:18.095: INFO: Pod "alpine-nnp-false-a1099b85-f45d-44a5-b1db-092e54e30c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.720653556s Apr 16 04:26:20.334: INFO: Pod "alpine-nnp-false-a1099b85-f45d-44a5-b1db-092e54e30c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959354304s Apr 16 04:26:22.573: INFO: Pod "alpine-nnp-false-a1099b85-f45d-44a5-b1db-092e54e30c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.198703755s Apr 16 04:26:24.813: INFO: Pod "alpine-nnp-false-a1099b85-f45d-44a5-b1db-092e54e30c3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.438635839s Apr 16 04:26:24.813: INFO: Pod "alpine-nnp-false-a1099b85-f45d-44a5-b1db-092e54e30c3a" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:26:25.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-20" for this suite. ... skipping 2 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m when creating containers with AllowPrivilegeEscalation [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296[0m should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":103,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:25.550: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 160 lines ... [It] should support readOnly directory specified in the volumeMount /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365 Apr 16 04:26:17.459: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Apr 16 04:26:17.694: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod pod-subpath-test-inlinevolume-sjtk [1mSTEP[0m: Creating a pod to test subpath Apr 16 04:26:17.932: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-sjtk" in namespace "provisioning-2464" to be "Succeeded or Failed" Apr 16 04:26:18.170: INFO: Pod "pod-subpath-test-inlinevolume-sjtk": Phase="Pending", Reason="", readiness=false. Elapsed: 238.097625ms Apr 16 04:26:20.405: INFO: Pod "pod-subpath-test-inlinevolume-sjtk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473359369s Apr 16 04:26:22.641: INFO: Pod "pod-subpath-test-inlinevolume-sjtk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.708945971s Apr 16 04:26:24.877: INFO: Pod "pod-subpath-test-inlinevolume-sjtk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.944611024s Apr 16 04:26:27.112: INFO: Pod "pod-subpath-test-inlinevolume-sjtk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.180047632s [1mSTEP[0m: Saw pod success Apr 16 04:26:27.112: INFO: Pod "pod-subpath-test-inlinevolume-sjtk" satisfied condition "Succeeded or Failed" Apr 16 04:26:27.347: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod pod-subpath-test-inlinevolume-sjtk container test-container-subpath-inlinevolume-sjtk: <nil> [1mSTEP[0m: delete the pod Apr 16 04:26:27.826: INFO: Waiting for pod pod-subpath-test-inlinevolume-sjtk to disappear Apr 16 04:26:28.065: INFO: Pod pod-subpath-test-inlinevolume-sjtk no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-inlinevolume-sjtk Apr 16 04:26:28.065: INFO: Deleting pod "pod-subpath-test-inlinevolume-sjtk" in namespace "provisioning-2464" ... skipping 12 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Inline-volume (default fs)] subPath [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":15,"skipped":93,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:29.043: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern ... skipping 154 lines ... [32m• [SLOW TEST:11.284 seconds][0m [sig-auth] ServiceAccounts [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23[0m should ensure a single API token exists [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52[0m [90m------------------------------[0m {"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":11,"skipped":109,"failed":0} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:30.160: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 79 lines ... 
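(Editor's note: the subPath tests above expose only one sub-directory of a volume inside the container, optionally read-only. A brief sketch of the shape; the volume type matches the inline hostPath variant that just passed, but the names and paths are illustrative.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// subPathReadOnlyMount returns a volume plus a mount that exposes only one
// sub-directory of it, read-only: reads succeed, writes fail with a
// read-only filesystem error, which is what the test asserts.
func subPathReadOnlyMount() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/provisioning"},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test-volume",
		SubPath:   "provisioning-2464", // only this sub-directory is visible
		ReadOnly:  true,
	}
	return vol, mount
}

func main() {
	v, m := subPathReadOnlyMount()
	fmt.Println(v.Name, m.MountPath, m.ReadOnly)
}
```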
[sig-storage] In-tree Volumes [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23[0m [Driver: gluster] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (immediate binding)] topology [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach][0m [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192[0m [36mOnly supported for node OS distro [gci ubuntu custom] (not debian)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263 [90m------------------------------[0m ... skipping 29 lines ... [32m• [SLOW TEST:5.904 seconds][0m [sig-node] RuntimeClass [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23[0m should support RuntimeClasses API operations [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":16,"skipped":109,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:35.070: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 289 lines ... [32m• [SLOW TEST:16.149 seconds][0m [sig-api-machinery] Garbage collector [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m should delete pods created by rc when not orphaning [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":14,"skipped":99,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:39.843: INFO: Driver emptydir doesn't support DynamicPV -- skipping ... skipping 85 lines ... 
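(Editor's note: "RuntimeClasses API operations" above is plain CRUD against the node.k8s.io/v1 API. A minimal client-go sketch, assuming a kubeconfig at the default path; "runc" must match a handler actually configured in the node's CRI runtime, and the object name is hypothetical.)

```go
package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Create a cluster-scoped RuntimeClass, list, then delete it.
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"},
		Handler:    "runc",
	}
	created, err := cs.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", created.Name)

	list, err := cs.NodeV1().RuntimeClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtimeclasses:", len(list.Items))

	if err := cs.NodeV1().RuntimeClasses().Delete(ctx, created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```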
[32m• [SLOW TEST:126.443 seconds][0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23[0m works for multiple CRDs of same group but different versions [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy ... skipping 78 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":10,"skipped":34,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:41.029: INFO: Only supported for providers [gce gke] (not aws) ... skipping 147 lines ... 
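(Editor's note: the "(OnRootMismatch)" in the fsgroupchangepolicy test name above refers to pod.spec.securityContext.fsGroupChangePolicy. A sketch of the relevant field; the fsGroup value is illustrative.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// onRootMismatch: with this policy the kubelet recursively changes volume
// ownership to the pod's fsGroup only when the volume root's owner/permissions
// differ, instead of walking the whole volume on every mount.
func onRootMismatch(fsGroup int64) *corev1.PodSecurityContext {
	policy := corev1.FSGroupChangeOnRootMismatch
	return &corev1.PodSecurityContext{
		FSGroup:             &fsGroup,
		FSGroupChangePolicy: &policy,
	}
}

func main() { fmt.Printf("%+v\n", onRootMismatch(2000)) }
```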
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:166[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":10,"skipped":74,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:41.239: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 49 lines ... /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 04:26:41.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "request-timeout-1468" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":3,"skipped":4,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:42.111: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping ... skipping 33 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380[0m [36mOnly supported for providers [vsphere] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438 [90m------------------------------[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":40,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:25:29.949: INFO: >>> kubeConfig: /root/.kube/config ... skipping 152 lines ... 
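(Editor's note: the block-volmode provisioning test in progress here, "should provision storage with pvc data source", clones an existing claim: the new PVC's dataSource names the source PVC and the CSI driver provisions a copy. A hedged sketch of such a claim; "source-pvc", the class name, and the size are hypothetical.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clonePVC requests a new raw-block claim whose contents are cloned from an
// existing claim in the same namespace via spec.dataSource.
func clonePVC(storageClass string) *corev1.PersistentVolumeClaim {
	block := corev1.PersistentVolumeBlock
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "cloned-pvc"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:       &block,
			StorageClassName: &storageClass,
			DataSource: &corev1.TypedLocalObjectReference{
				Kind: "PersistentVolumeClaim", // core group, so no APIGroup needed
				Name: "source-pvc",
			},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Mi")},
			},
		},
	}
}

func main() { fmt.Println(clonePVC("csi-hostpath-sc").Name) }
```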
[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":12,"skipped":40,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:42.579: INFO: Driver local doesn't support DynamicPV -- skipping ... skipping 192 lines ... [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208[0m [36mDriver local doesn't support DynamicPV -- skipping[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":13,"skipped":137,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:26:35.616: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename projected [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [1mSTEP[0m: Creating a pod to test downward API volume plugin Apr 16 04:26:37.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-779f415a-10c4-40c6-9059-d7f9fd2e1884" in namespace "projected-3079" to be "Succeeded or Failed" Apr 16 04:26:37.287: INFO: Pod "downwardapi-volume-779f415a-10c4-40c6-9059-d7f9fd2e1884": Phase="Pending", Reason="", readiness=false. Elapsed: 235.071915ms Apr 16 04:26:39.524: INFO: Pod "downwardapi-volume-779f415a-10c4-40c6-9059-d7f9fd2e1884": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471195841s Apr 16 04:26:41.759: INFO: Pod "downwardapi-volume-779f415a-10c4-40c6-9059-d7f9fd2e1884": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.706767312s [1mSTEP[0m: Saw pod success Apr 16 04:26:41.759: INFO: Pod "downwardapi-volume-779f415a-10c4-40c6-9059-d7f9fd2e1884" satisfied condition "Succeeded or Failed" Apr 16 04:26:41.994: INFO: Trying to get logs from node ip-172-20-50-117.ap-south-1.compute.internal pod downwardapi-volume-779f415a-10c4-40c6-9059-d7f9fd2e1884 container client-container: <nil> [1mSTEP[0m: delete the pod Apr 16 04:26:42.472: INFO: Waiting for pod downwardapi-volume-779f415a-10c4-40c6-9059-d7f9fd2e1884 to disappear Apr 16 04:26:42.707: INFO: Pod downwardapi-volume-779f415a-10c4-40c6-9059-d7f9fd2e1884 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 4 lines ... [32m• [SLOW TEST:7.563 seconds][0m [sig-storage] Projected downwardAPI [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23[0m should provide podname only [NodeConformance] [Conformance] [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":137,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:43.199: INFO: Only supported for providers [openstack] (not aws) ... skipping 49 lines ... [32m• [SLOW TEST:9.692 seconds][0m [sig-network] Services [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23[0m should release NodePorts on delete [90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1582[0m [90m------------------------------[0m {"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":17,"skipped":118,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Apr 16 04:26:44.828: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 ... skipping 53 lines ... 
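(Editor's note: the projected downwardAPI test above, "should provide podname only", projects a single file whose content is the pod's own metadata.name. The volume, sketched; the volume and file names are illustrative.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podnameVolume projects one downward API file, "podname", carrying the
// pod's own name; the test's client container just cats the file back.
func podnameVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
}

func main() { fmt.Println(podnameVolume().Name) }
```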
[36mOnly supported for providers [gce gke] (not aws)[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":131,"failed":0} [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Apr 16 04:26:06.917: INFO: >>> kubeConfig: /root/.kube/config [1mSTEP[0m: Building a namespace api object, basename mounted-volume-expand [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace ... skipping 40721 lines ... service.go:301] Service resourcequota-1343/test-service updated: 0 ports\nI0416 04:35:28.412396 1 service.go:441] Removing service port \"resourcequota-1343/test-service\"\nI0416 04:35:28.412521 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:28.463617 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"51.195681ms\"\nI0416 04:35:28.662228 1 service.go:301] Service resourcequota-1343/test-service-np updated: 0 ports\nI0416 04:35:28.662267 1 service.go:441] Removing service port \"resourcequota-1343/test-service-np\"\nI0416 04:35:28.662632 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:28.714739 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"52.459346ms\"\nI0416 04:35:30.726020 1 service.go:301] Service services-3993/nodeport-range-test updated: 1 ports\nI0416 04:35:30.726150 1 service.go:416] Adding new service port \"services-3993/nodeport-range-test\" at 100.66.29.148:80/TCP\nI0416 04:35:30.726481 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:30.755940 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-3993/nodeport-range-test\\\" (:30177/tcp4)\"\nI0416 04:35:30.761847 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.691758ms\"\nI0416 04:35:30.762074 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:30.801576 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.627986ms\"\nI0416 04:35:31.433651 1 service.go:301] Service services-3993/nodeport-range-test updated: 0 ports\nI0416 04:35:31.801837 1 service.go:441] Removing service port \"services-3993/nodeport-range-test\"\nI0416 04:35:31.802127 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:31.839873 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.03916ms\"\nI0416 04:35:38.726075 1 service.go:301] Service services-2553/nodeport-update-service updated: 0 ports\nI0416 04:35:38.726147 1 service.go:441] Removing service port \"services-2553/nodeport-update-service:tcp-port\"\nI0416 04:35:38.726165 1 service.go:441] Removing service port \"services-2553/nodeport-update-service:udp-port\"\nI0416 04:35:38.726289 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:38.824772 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"98.64314ms\"\nI0416 04:35:38.824954 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:38.869973 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"45.158329ms\"\nI0416 04:35:47.636200 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:47.691298 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"55.172374ms\"\nI0416 
04:35:47.878216 1 service.go:301] Service services-1477/sourceip-test updated: 0 ports\nI0416 04:35:47.878252 1 service.go:441] Removing service port \"services-1477/sourceip-test\"\nI0416 04:35:47.878374 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:47.933443 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"55.176088ms\"\nI0416 04:35:48.934306 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:48.966577 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.383952ms\"\nI0416 04:35:52.725374 1 service.go:301] Service webhook-7388/e2e-test-webhook updated: 1 ports\nI0416 04:35:52.725484 1 service.go:416] Adding new service port \"webhook-7388/e2e-test-webhook\" at 100.64.25.135:8443/TCP\nI0416 04:35:52.725681 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:52.778968 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"53.482308ms\"\nI0416 04:35:52.779217 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:52.818403 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.343944ms\"\nI0416 04:35:56.706788 1 service.go:301] Service kubectl-8920/agnhost-primary updated: 1 ports\nI0416 04:35:56.706831 1 service.go:416] Adding new service port \"kubectl-8920/agnhost-primary\" at 100.70.175.221:6379/TCP\nI0416 04:35:56.706963 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:56.754154 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"47.317101ms\"\nI0416 04:35:56.754355 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:56.789219 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.018312ms\"\nI0416 04:35:57.301716 1 service.go:301] Service webhook-7388/e2e-test-webhook updated: 0 ports\nI0416 04:35:57.790125 1 service.go:441] Removing service port \"webhook-7388/e2e-test-webhook\"\nI0416 04:35:57.790297 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:57.829881 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.744089ms\"\nI0416 04:36:11.497661 1 service.go:301] Service services-5823/tolerate-unready updated: 1 ports\nI0416 04:36:11.497706 1 service.go:416] Adding new service port \"services-5823/tolerate-unready:http\" at 100.67.60.139:80/TCP\nI0416 04:36:11.497829 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:11.533624 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.913836ms\"\nI0416 04:36:11.533784 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:11.582624 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.955631ms\"\nI0416 04:36:12.182549 1 service.go:301] Service kubectl-8920/agnhost-primary updated: 0 ports\nI0416 04:36:12.583438 1 service.go:441] Removing service port \"kubectl-8920/agnhost-primary\"\nI0416 04:36:12.583774 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:12.755398 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"172.017831ms\"\nI0416 04:36:13.946124 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:13.980535 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.480666ms\"\nI0416 04:36:15.458342 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:15.459052 1 service.go:301] Service services-6780/service-headless-toggled updated: 0 ports\nI0416 04:36:15.582051 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"123.942429ms\"\nI0416 04:36:15.582086 1 service.go:441] Removing service port \"services-6780/service-headless-toggled\"\nI0416 04:36:15.583285 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:15.720735 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"138.638782ms\"\nI0416 
04:36:22.511653 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:22.579258 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"67.73079ms\"\nI0416 04:36:22.579378 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:22.611851 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.554711ms\"\nI0416 04:36:22.731442 1 service.go:301] Service services-6926/service-proxy-toggled updated: 0 ports\nI0416 04:36:23.612070 1 service.go:441] Removing service port \"services-6926/service-proxy-toggled\"\nI0416 04:36:23.612209 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:23.695503 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"83.43099ms\"\nI0416 04:36:27.248504 1 service.go:301] Service services-1817/externalip-test updated: 0 ports\nI0416 04:36:27.248631 1 service.go:441] Removing service port \"services-1817/externalip-test:http\"\nI0416 04:36:27.248781 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:27.288758 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.198601ms\"\nI0416 04:36:27.289126 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:27.331602 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.743593ms\"\nI0416 04:36:33.772794 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:33.849968 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"77.287377ms\"\nI0416 04:36:34.000747 1 service.go:301] Service services-31/endpoint-test2 updated: 0 ports\nI0416 04:36:34.000792 1 service.go:441] Removing service port \"services-31/endpoint-test2\"\nI0416 04:36:34.000910 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:34.093936 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"93.129827ms\"\nI0416 04:36:35.094384 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:35.126335 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.10277ms\"\nI0416 04:36:39.736759 1 service.go:301] Service endpointslice-5636/example-empty-selector updated: 1 ports\nI0416 04:36:39.736933 1 service.go:416] Adding new service port \"endpointslice-5636/example-empty-selector:example\" at 100.64.7.57:80/TCP\nI0416 04:36:39.737351 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:39.853989 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"117.053139ms\"\nI0416 04:36:39.854246 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:39.940692 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"86.597074ms\"\nI0416 04:36:40.440627 1 service.go:301] Service endpointslice-5636/example-empty-selector updated: 0 ports\nI0416 04:36:40.941835 1 service.go:441] Removing service port \"endpointslice-5636/example-empty-selector:example\"\nI0416 04:36:40.941941 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:40.971807 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.973494ms\"\nI0416 04:37:02.485521 1 service.go:301] Service aggregator-3805/sample-api updated: 1 ports\nI0416 04:37:02.485646 1 service.go:416] Adding new service port \"aggregator-3805/sample-api\" at 100.71.142.186:7443/TCP\nI0416 04:37:02.485825 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:37:02.566466 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"80.808629ms\"\nI0416 04:37:02.566706 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:37:02.602001 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.367034ms\"\nI0416 04:37:03.775675 1 service.go:301] Service apply-3502/test-svc updated: 1 ports\nI0416 04:37:03.775721 1 service.go:416] Adding new service port \"apply-3502/test-svc\" at 
100.68.162.205:8080/UDP\nI0416 04:37:03.775861 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:37:03.838204 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"62.464432ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-40-167.ap-south-1.compute.internal ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-42-21.ap-south-1.compute.internal ====\nI0416 04:14:16.330555 1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0416 04:14:16.341560 1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0416 04:14:16.341572 1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0416 04:14:16.341582 1 flags.go:59] FLAG: --bind-address-hard-fail=\"false\"\nI0416 04:14:16.341588 1 flags.go:59] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0416 04:14:16.341592 1 flags.go:59] FLAG: --cleanup=\"false\"\nI0416 04:14:16.341596 1 flags.go:59] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0416 04:14:16.341602 1 flags.go:59] FLAG: --config=\"\"\nI0416 04:14:16.341605 1 flags.go:59] FLAG: --config-sync-period=\"15m0s\"\nI0416 04:14:16.341611 1 flags.go:59] FLAG: --conntrack-max-per-core=\"131072\"\nI0416 04:14:16.341616 1 flags.go:59] FLAG: --conntrack-min=\"131072\"\nI0416 04:14:16.341620 1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0416 04:14:16.341624 1 flags.go:59] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0416 04:14:16.341628 1 flags.go:59] FLAG: --detect-local-mode=\"\"\nI0416 04:14:16.341632 1 flags.go:59] FLAG: --feature-gates=\"\"\nI0416 04:14:16.341638 1 flags.go:59] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0416 04:14:16.341643 1 flags.go:59] FLAG: --healthz-port=\"10256\"\nI0416 04:14:16.341647 1 flags.go:59] FLAG: --help=\"false\"\nI0416 04:14:16.341650 1 flags.go:59] FLAG: --hostname-override=\"ip-172-20-42-21.ap-south-1.compute.internal\"\nI0416 04:14:16.341656 1 flags.go:59] FLAG: --iptables-masquerade-bit=\"14\"\nI0416 04:14:16.341659 1 flags.go:59] FLAG: --iptables-min-sync-period=\"1s\"\nI0416 04:14:16.341663 1 flags.go:59] FLAG: --iptables-sync-period=\"30s\"\nI0416 04:14:16.341667 1 flags.go:59] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0416 04:14:16.341678 1 flags.go:59] FLAG: --ipvs-min-sync-period=\"0s\"\nI0416 04:14:16.341681 1 flags.go:59] FLAG: --ipvs-scheduler=\"\"\nI0416 04:14:16.341685 1 flags.go:59] FLAG: --ipvs-strict-arp=\"false\"\nI0416 04:14:16.341688 1 flags.go:59] FLAG: --ipvs-sync-period=\"30s\"\nI0416 04:14:16.341692 1 flags.go:59] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0416 04:14:16.341696 1 flags.go:59] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0416 04:14:16.341699 1 flags.go:59] FLAG: --ipvs-udp-timeout=\"0s\"\nI0416 04:14:16.341703 1 flags.go:59] FLAG: --kube-api-burst=\"10\"\nI0416 04:14:16.341707 1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0416 04:14:16.341711 1 flags.go:59] FLAG: --kube-api-qps=\"5\"\nI0416 04:14:16.341718 1 flags.go:59] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0416 04:14:16.341722 1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0416 04:14:16.341729 1 flags.go:59] FLAG: --log-dir=\"\"\nI0416 04:14:16.341733 1 flags.go:59] FLAG: --log-file=\"/var/log/kube-proxy.log\"\nI0416 04:14:16.341737 1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0416 04:14:16.341741 1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0416 04:14:16.341746 1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0416 04:14:16.341751 1 flags.go:59] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0416 
04:14:16.341757 1 flags.go:59] FLAG: --masquerade-all=\"false\"\nI0416 04:14:16.341761 1 flags.go:59] FLAG: --master=\"https://127.0.0.1\"\nI0416 04:14:16.341765 1 flags.go:59] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0416 04:14:16.341770 1 flags.go:59] FLAG: --metrics-port=\"10249\"\nI0416 04:14:16.341774 1 flags.go:59] FLAG: --nodeport-addresses=\"[]\"\nI0416 04:14:16.341781 1 flags.go:59] FLAG: --one-output=\"false\"\nI0416 04:14:16.341785 1 flags.go:59] FLAG: --oom-score-adj=\"-998\"\nI0416 04:14:16.341789 1 flags.go:59] FLAG: --profiling=\"false\"\nI0416 04:14:16.341792 1 flags.go:59] FLAG: --proxy-mode=\"\"\nI0416 04:14:16.341801 1 flags.go:59] FLAG: --proxy-port-range=\"\"\nI0416 04:14:16.341806 1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0416 04:14:16.341810 1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0416 04:14:16.341815 1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0416 04:14:16.341820 1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0416 04:14:16.341823 1 flags.go:59] FLAG: --udp-timeout=\"250ms\"\nI0416 04:14:16.341827 1 flags.go:59] FLAG: --v=\"2\"\nI0416 04:14:16.341831 1 flags.go:59] FLAG: --version=\"false\"\nI0416 04:14:16.341836 1 flags.go:59] FLAG: --vmodule=\"\"\nI0416 04:14:16.341840 1 flags.go:59] FLAG: --write-config-to=\"\"\nW0416 04:14:16.341851 1 server.go:224] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.\nI0416 04:14:16.341943 1 feature_gate.go:245] feature gates: &{map[]}\nI0416 04:14:16.342027 1 feature_gate.go:245] feature gates: &{map[]}\nE0416 04:14:26.501358 1 node.go:161] Failed to retrieve node info: Get \"https://127.0.0.1/api/v1/nodes/ip-172-20-42-21.ap-south-1.compute.internal\": net/http: TLS handshake timeout\nE0416 04:14:48.354506 1 node.go:161] Failed to retrieve node info: Get \"https://127.0.0.1/api/v1/nodes/ip-172-20-42-21.ap-south-1.compute.internal\": net/http: TLS handshake timeout\nE0416 04:14:55.200787 1 node.go:161] Failed to retrieve node info: nodes \"ip-172-20-42-21.ap-south-1.compute.internal\" is forbidden: User \"system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope\nE0416 04:14:59.671911 1 node.go:161] Failed to retrieve node info: nodes \"ip-172-20-42-21.ap-south-1.compute.internal\" not found\nI0416 04:15:07.776200 1 node.go:172] Successfully retrieved node IP: 172.20.42.21\nI0416 04:15:07.776225 1 server_others.go:140] Detected node IP 172.20.42.21\nW0416 04:15:07.776917 1 server_others.go:565] Unknown proxy mode \"\", assuming iptables proxy\nI0416 04:15:07.777025 1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'\nI0416 04:15:07.848484 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary\nI0416 04:15:07.848509 1 server_others.go:212] Using iptables Proxier.\nI0416 04:15:07.848536 1 server_others.go:219] creating dualStackProxier for iptables.\nW0416 04:15:07.849286 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\nI0416 04:15:07.850014 1 utils.go:370] Changed sysctl \"net/ipv4/conf/all/route_localnet\": 0 -> 1\nI0416 04:15:07.850823 1 proxier.go:276] \"Missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended\"\nI0416 04:15:07.850843 1 proxier.go:282] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI0416 04:15:07.850881 1 proxier.go:328] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" 
syncPeriod=\"30s\" burstSyncs=2\nI0416 04:15:07.850917 1 proxier.go:338] \"Iptables supports --random-fully\" ipFamily=IPv4\nI0416 04:15:07.850967 1 proxier.go:276] \"Missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended\"\nI0416 04:15:07.850978 1 proxier.go:282] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI0416 04:15:07.851003 1 proxier.go:328] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0416 04:15:07.851020 1 proxier.go:338] \"Iptables supports --random-fully\" ipFamily=IPv6\nI0416 04:15:07.854909 1 server.go:649] Version: v1.22.8\nI0416 04:15:07.859189 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144\nI0416 04:15:07.859217 1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI0416 04:15:07.859291 1 mount_linux.go:207] Detected OS without systemd\nI0416 04:15:07.859480 1 conntrack.go:83] Setting conntrack hashsize to 65536\nI0416 04:15:07.872591 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0416 04:15:07.872641 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0416 04:15:07.874118 1 config.go:315] Starting service config controller\nI0416 04:15:07.874672 1 shared_informer.go:240] Waiting for caches to sync for service config\nI0416 04:15:07.874751 1 config.go:224] Starting endpoint slice config controller\nI0416 04:15:07.874796 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nI0416 04:15:07.879495 1 service.go:301] Service default/kubernetes updated: 1 ports\nI0416 04:15:07.879583 1 service.go:301] Service kube-system/kube-dns updated: 3 ports\nI0416 04:15:07.975450 1 shared_informer.go:247] Caches are synced for service config \nI0416 04:15:07.975454 1 shared_informer.go:247] Caches are synced for endpoint slice config \nI0416 04:15:07.976353 1 proxier.go:805] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0416 04:15:07.976429 1 proxier.go:805] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0416 04:15:07.976487 1 service.go:416] Adding new service port \"default/kubernetes:https\" at 100.64.0.1:443/TCP\nI0416 04:15:07.976509 1 service.go:416] Adding new service port \"kube-system/kube-dns:dns\" at 100.64.0.10:53/UDP\nI0416 04:15:07.976521 1 service.go:416] Adding new service port \"kube-system/kube-dns:dns-tcp\" at 100.64.0.10:53/TCP\nI0416 04:15:07.976530 1 service.go:416] Adding new service port \"kube-system/kube-dns:metrics\" at 100.64.0.10:9153/TCP\nI0416 04:15:07.976575 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:15:08.031902 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"55.436981ms\"\nI0416 04:15:08.031938 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:15:08.059073 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.14706ms\"\nI0416 04:15:18.529054 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:15:18.584786 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"55.760309ms\"\nI0416 04:16:56.071879 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:16:56.112405 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.587338ms\"\nI0416 04:16:56.113148 1 proxier.go:830] \"Stale service\" protocol=\"udp\" svcPortName=\"kube-system/kube-dns:dns\" clusterIP=\"100.64.0.10\"\nI0416 04:16:56.113183 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:16:56.156859 1 proxier.go:813] 
\"SyncProxyRules complete\" elapsed=\"44.425401ms\"\nI0416 04:16:57.157468 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:16:57.192547 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.105969ms\"\nI0416 04:16:58.193682 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:16:58.239562 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"45.950766ms\"\nI0416 04:20:23.740984 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:23.743288 1 service.go:301] Service services-5694/hairpin-test updated: 1 ports\nI0416 04:20:23.772791 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.825872ms\"\nI0416 04:20:23.772968 1 service.go:416] Adding new service port \"services-5694/hairpin-test\" at 100.71.71.24:8080/TCP\nI0416 04:20:23.773115 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:23.811095 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.134192ms\"\nW0416 04:20:25.096593 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing5t7tp\nW0416 04:20:25.331066 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingrf92q\nW0416 04:20:25.566111 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingpmlck\nW0416 04:20:26.982419 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingpmlck\nW0416 04:20:27.452099 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingpmlck\nW0416 04:20:27.688196 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingpmlck\nW0416 04:20:28.394825 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing5t7tp\nW0416 04:20:28.396774 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingrf92q\nI0416 04:20:28.734235 1 service.go:301] Service proxy-8835/proxy-service-gl6lf updated: 4 ports\nI0416 04:20:28.734600 1 service.go:416] Adding new service port \"proxy-8835/proxy-service-gl6lf:portname1\" at 100.71.171.178:80/TCP\nI0416 04:20:28.734700 1 service.go:416] Adding new service port \"proxy-8835/proxy-service-gl6lf:portname2\" at 100.71.171.178:81/TCP\nI0416 04:20:28.734768 1 service.go:416] Adding new service port \"proxy-8835/proxy-service-gl6lf:tlsportname1\" at 100.71.171.178:443/TCP\nI0416 04:20:28.734825 1 service.go:416] Adding new service port \"proxy-8835/proxy-service-gl6lf:tlsportname2\" at 100.71.171.178:444/TCP\nI0416 04:20:28.735630 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:28.774994 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.40001ms\"\nI0416 04:20:28.775122 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:28.800652 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"25.541945ms\"\nI0416 04:20:29.980708 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:30.010274 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.598002ms\"\nI0416 04:20:31.010803 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:31.051241 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.494505ms\"\nI0416 04:20:32.052554 1 proxier.go:846] 
\"Syncing iptables rules\"\nI0416 04:20:32.081864 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.559654ms\"\nI0416 04:20:38.422808 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:38.449446 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.682342ms\"\nI0416 04:20:39.721335 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:39.758264 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.987576ms\"\nI0416 04:20:39.758422 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:39.787596 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.299349ms\"\nI0416 04:20:42.908821 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:42.957494 1 service.go:301] Service services-5694/hairpin-test updated: 0 ports\nI0416 04:20:42.959238 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"50.482196ms\"\nI0416 04:20:42.959266 1 service.go:441] Removing service port \"services-5694/hairpin-test\"\nI0416 04:20:42.959327 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:42.998347 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.073024ms\"\nI0416 04:20:43.999262 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:44.048157 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.913704ms\"\nI0416 04:20:45.297310 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:45.356609 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"59.31617ms\"\nI0416 04:20:45.394507 1 service.go:301] Service proxy-8835/proxy-service-gl6lf updated: 0 ports\nI0416 04:20:46.357680 1 service.go:441] Removing service port \"proxy-8835/proxy-service-gl6lf:portname1\"\nI0416 04:20:46.357706 1 service.go:441] Removing service port \"proxy-8835/proxy-service-gl6lf:portname2\"\nI0416 04:20:46.357711 1 service.go:441] Removing service port \"proxy-8835/proxy-service-gl6lf:tlsportname1\"\nI0416 04:20:46.357718 1 service.go:441] Removing service port \"proxy-8835/proxy-service-gl6lf:tlsportname2\"\nI0416 04:20:46.357764 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:20:46.383851 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.174552ms\"\nI0416 04:21:31.558643 1 service.go:301] Service services-538/test-service-6bpqw updated: 1 ports\nI0416 04:21:31.558793 1 service.go:416] Adding new service port \"services-538/test-service-6bpqw:http\" at 100.66.168.152:80/TCP\nI0416 04:21:31.558855 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:31.600361 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"41.593487ms\"\nI0416 04:21:32.270562 1 service.go:301] Service services-538/test-service-6bpqw updated: 1 ports\nI0416 04:21:32.270630 1 service.go:418] Updating existing service port \"services-538/test-service-6bpqw:http\" at 100.66.168.152:80/TCP\nI0416 04:21:32.270670 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:32.301629 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.020386ms\"\nI0416 04:21:33.933299 1 service.go:301] Service services-538/test-service-6bpqw updated: 0 ports\nI0416 04:21:33.933493 1 service.go:441] Removing service port \"services-538/test-service-6bpqw:http\"\nI0416 04:21:33.933582 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:33.982255 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.754466ms\"\nI0416 04:21:44.983906 1 service.go:301] Service svc-latency-3981/latency-svc-49w6x updated: 1 ports\nI0416 04:21:44.984110 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-49w6x\" at 100.64.111.15:80/TCP\nI0416 04:21:44.984244 1 proxier.go:846] 
\"Syncing iptables rules\"\nI0416 04:21:45.041963 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"57.860774ms\"\nI0416 04:21:45.042099 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:45.075893 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.891485ms\"\nI0416 04:21:45.239130 1 service.go:301] Service svc-latency-3981/latency-svc-fglqx updated: 1 ports\nI0416 04:21:45.245861 1 service.go:301] Service svc-latency-3981/latency-svc-p9h96 updated: 1 ports\nI0416 04:21:45.252073 1 service.go:301] Service svc-latency-3981/latency-svc-qrm7g updated: 1 ports\nI0416 04:21:45.258956 1 service.go:301] Service svc-latency-3981/latency-svc-qmd8f updated: 1 ports\nI0416 04:21:45.277392 1 service.go:301] Service svc-latency-3981/latency-svc-fw4ht updated: 1 ports\nI0416 04:21:45.473746 1 service.go:301] Service svc-latency-3981/latency-svc-2x2b2 updated: 1 ports\nI0416 04:21:45.497805 1 service.go:301] Service svc-latency-3981/latency-svc-c8p2l updated: 1 ports\nI0416 04:21:45.503070 1 service.go:301] Service svc-latency-3981/latency-svc-vcpfc updated: 1 ports\nI0416 04:21:45.509297 1 service.go:301] Service svc-latency-3981/latency-svc-xjbfs updated: 1 ports\nI0416 04:21:45.518369 1 service.go:301] Service svc-latency-3981/latency-svc-q7xh9 updated: 1 ports\nI0416 04:21:45.528958 1 service.go:301] Service svc-latency-3981/latency-svc-4sxrn updated: 1 ports\nI0416 04:21:45.534505 1 service.go:301] Service svc-latency-3981/latency-svc-pkvbd updated: 1 ports\nI0416 04:21:45.541294 1 service.go:301] Service svc-latency-3981/latency-svc-24kvw updated: 1 ports\nI0416 04:21:45.546127 1 service.go:301] Service svc-latency-3981/latency-svc-x2k7p updated: 1 ports\nI0416 04:21:45.557278 1 service.go:301] Service svc-latency-3981/latency-svc-p7zbc updated: 1 ports\nI0416 04:21:45.557578 1 service.go:301] Service svc-latency-3981/latency-svc-2lf5w updated: 1 ports\nI0416 04:21:45.564436 1 service.go:301] Service svc-latency-3981/latency-svc-r7mp2 updated: 1 ports\nI0416 04:21:45.571467 1 service.go:301] Service svc-latency-3981/latency-svc-dtb4m updated: 1 ports\nI0416 04:21:45.577243 1 service.go:301] Service svc-latency-3981/latency-svc-lhtjh updated: 1 ports\nI0416 04:21:45.583594 1 service.go:301] Service svc-latency-3981/latency-svc-6txzc updated: 1 ports\nI0416 04:21:45.733239 1 service.go:301] Service svc-latency-3981/latency-svc-j88m6 updated: 1 ports\nI0416 04:21:45.756762 1 service.go:301] Service svc-latency-3981/latency-svc-kw5hl updated: 1 ports\nI0416 04:21:45.767762 1 service.go:301] Service svc-latency-3981/latency-svc-d7rjl updated: 1 ports\nI0416 04:21:45.769606 1 service.go:301] Service svc-latency-3981/latency-svc-d9q85 updated: 1 ports\nI0416 04:21:45.776027 1 service.go:301] Service svc-latency-3981/latency-svc-qgpmd updated: 1 ports\nI0416 04:21:45.789426 1 service.go:301] Service svc-latency-3981/latency-svc-gdvbv updated: 1 ports\nI0416 04:21:45.798886 1 service.go:301] Service svc-latency-3981/latency-svc-kwzpv updated: 1 ports\nI0416 04:21:45.810341 1 service.go:301] Service svc-latency-3981/latency-svc-6kkhg updated: 1 ports\nI0416 04:21:45.816035 1 service.go:301] Service svc-latency-3981/latency-svc-tnsf2 updated: 1 ports\nI0416 04:21:45.825315 1 service.go:301] Service svc-latency-3981/latency-svc-g85nc updated: 1 ports\nI0416 04:21:45.838494 1 service.go:301] Service svc-latency-3981/latency-svc-x88zw updated: 1 ports\nI0416 04:21:45.845451 1 service.go:301] Service svc-latency-3981/latency-svc-7vw8t updated: 1 ports\nI0416 04:21:45.860583 1 service.go:301] 
Service svc-latency-3981/latency-svc-4mdx5 updated: 1 ports\nI0416 04:21:45.972040 1 service.go:301] Service svc-latency-3981/latency-svc-n2tgb updated: 1 ports\nI0416 04:21:45.977766 1 service.go:301] Service svc-latency-3981/latency-svc-r7jf4 updated: 1 ports\nI0416 04:21:45.987568 1 service.go:301] Service svc-latency-3981/latency-svc-44wwt updated: 1 ports\nI0416 04:21:45.987869 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7vw8t\" at 100.71.59.41:80/TCP\nI0416 04:21:45.987900 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qrm7g\" at 100.69.205.223:80/TCP\nI0416 04:21:45.987937 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qmd8f\" at 100.70.201.118:80/TCP\nI0416 04:21:45.987952 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-c8p2l\" at 100.67.55.209:80/TCP\nI0416 04:21:45.987965 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xjbfs\" at 100.67.123.74:80/TCP\nI0416 04:21:45.987978 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-j88m6\" at 100.67.135.125:80/TCP\nI0416 04:21:45.988012 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fglqx\" at 100.65.93.75:80/TCP\nI0416 04:21:45.988026 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4sxrn\" at 100.66.25.231:80/TCP\nI0416 04:21:45.988041 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-p9h96\" at 100.67.33.69:80/TCP\nI0416 04:21:45.988060 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-24kvw\" at 100.71.193.194:80/TCP\nI0416 04:21:45.988098 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-p7zbc\" at 100.70.179.25:80/TCP\nI0416 04:21:45.988112 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g85nc\" at 100.65.75.242:80/TCP\nI0416 04:21:45.988121 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4mdx5\" at 100.65.65.97:80/TCP\nI0416 04:21:45.988132 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kw5hl\" at 100.65.168.41:80/TCP\nI0416 04:21:45.988144 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-d9q85\" at 100.66.226.120:80/TCP\nI0416 04:21:45.988171 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-vcpfc\" at 100.70.202.77:80/TCP\nI0416 04:21:45.988196 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2lf5w\" at 100.66.247.166:80/TCP\nI0416 04:21:45.988209 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-dtb4m\" at 100.69.228.146:80/TCP\nI0416 04:21:45.988220 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-lhtjh\" at 100.70.10.168:80/TCP\nI0416 04:21:45.988237 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6txzc\" at 100.65.135.3:80/TCP\nI0416 04:21:45.988250 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2x2b2\" at 100.69.96.72:80/TCP\nI0416 04:21:45.988278 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-r7mp2\" at 100.65.132.22:80/TCP\nI0416 04:21:45.988296 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-gdvbv\" at 100.67.231.202:80/TCP\nI0416 04:21:45.988312 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-tnsf2\" at 100.71.133.105:80/TCP\nI0416 04:21:45.988325 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fw4ht\" at 
100.71.233.83:80/TCP\nI0416 04:21:45.988408 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-q7xh9\" at 100.70.95.236:80/TCP\nI0416 04:21:45.988431 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kwzpv\" at 100.65.158.70:80/TCP\nI0416 04:21:45.988446 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-r7jf4\" at 100.69.233.115:80/TCP\nI0416 04:21:45.988477 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-n2tgb\" at 100.67.18.53:80/TCP\nI0416 04:21:45.988486 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-pkvbd\" at 100.71.146.204:80/TCP\nI0416 04:21:45.988507 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-x2k7p\" at 100.68.94.179:80/TCP\nI0416 04:21:45.988536 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-d7rjl\" at 100.69.106.86:80/TCP\nI0416 04:21:45.988549 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qgpmd\" at 100.64.171.223:80/TCP\nI0416 04:21:45.988562 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-x88zw\" at 100.70.8.125:80/TCP\nI0416 04:21:45.988574 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6kkhg\" at 100.68.35.41:80/TCP\nI0416 04:21:45.988586 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-44wwt\" at 100.64.61.245:80/TCP\nI0416 04:21:45.988914 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:46.013365 1 service.go:301] Service svc-latency-3981/latency-svc-xf99p updated: 1 ports\nI0416 04:21:46.042453 1 service.go:301] Service svc-latency-3981/latency-svc-qxhct updated: 1 ports\nI0416 04:21:46.054814 1 service.go:301] Service svc-latency-3981/latency-svc-kr2r7 updated: 1 ports\nI0416 04:21:46.062071 1 service.go:301] Service svc-latency-3981/latency-svc-9jdjb updated: 1 ports\nI0416 04:21:46.062651 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"75.047248ms\"\nI0416 04:21:46.070379 1 service.go:301] Service svc-latency-3981/latency-svc-msttv updated: 1 ports\nI0416 04:21:46.076172 1 service.go:301] Service svc-latency-3981/latency-svc-bkp2m updated: 1 ports\nI0416 04:21:46.085694 1 service.go:301] Service svc-latency-3981/latency-svc-rdtjw updated: 1 ports\nI0416 04:21:46.091638 1 service.go:301] Service svc-latency-3981/latency-svc-xrnwp updated: 1 ports\nI0416 04:21:46.096135 1 service.go:301] Service svc-latency-3981/latency-svc-7mgwx updated: 1 ports\nI0416 04:21:46.104537 1 service.go:301] Service svc-latency-3981/latency-svc-g248w updated: 1 ports\nI0416 04:21:46.111027 1 service.go:301] Service svc-latency-3981/latency-svc-xct4m updated: 1 ports\nI0416 04:21:46.116546 1 service.go:301] Service svc-latency-3981/latency-svc-4dzlb updated: 1 ports\nI0416 04:21:46.225863 1 service.go:301] Service svc-latency-3981/latency-svc-9rjdq updated: 1 ports\nI0416 04:21:46.231048 1 service.go:301] Service svc-latency-3981/latency-svc-bxzdt updated: 1 ports\nI0416 04:21:46.243639 1 service.go:301] Service svc-latency-3981/latency-svc-4wd85 updated: 1 ports\nI0416 04:21:46.290403 1 service.go:301] Service svc-latency-3981/latency-svc-46zh6 updated: 1 ports\nI0416 04:21:46.299446 1 service.go:301] Service svc-latency-3981/latency-svc-8jtd7 updated: 1 ports\nI0416 04:21:46.309042 1 service.go:301] Service svc-latency-3981/latency-svc-vhd5k updated: 1 ports\nI0416 04:21:46.312785 1 service.go:301] Service svc-latency-3981/latency-svc-wd4zt updated: 1 ports\nI0416 04:21:46.318903 1 service.go:301] 
Service svc-latency-3981/latency-svc-wv8nx updated: 1 ports\nI0416 04:21:46.324444 1 service.go:301] Service svc-latency-3981/latency-svc-2w5ct updated: 1 ports\nI0416 04:21:46.330962 1 service.go:301] Service svc-latency-3981/latency-svc-zpx4h updated: 1 ports\nI0416 04:21:46.338123 1 service.go:301] Service svc-latency-3981/latency-svc-kjg25 updated: 1 ports\nI0416 04:21:46.364919 1 service.go:301] Service svc-latency-3981/latency-svc-5t6sm updated: 1 ports\nI0416 04:21:46.415281 1 service.go:301] Service svc-latency-3981/latency-svc-9fnrp updated: 1 ports\nI0416 04:21:46.466467 1 service.go:301] Service svc-latency-3981/latency-svc-wx822 updated: 1 ports\nI0416 04:21:46.531045 1 service.go:301] Service svc-latency-3981/latency-svc-cz7hv updated: 1 ports\nI0416 04:21:46.566999 1 service.go:301] Service svc-latency-3981/latency-svc-fzc9p updated: 1 ports\nI0416 04:21:46.614861 1 service.go:301] Service svc-latency-3981/latency-svc-d2xrl updated: 1 ports\nI0416 04:21:46.666762 1 service.go:301] Service svc-latency-3981/latency-svc-xjlnp updated: 1 ports\nI0416 04:21:46.715531 1 service.go:301] Service svc-latency-3981/latency-svc-56brr updated: 1 ports\nI0416 04:21:46.772610 1 service.go:301] Service svc-latency-3981/latency-svc-jrn8c updated: 1 ports\nI0416 04:21:46.816565 1 service.go:301] Service svc-latency-3981/latency-svc-fbqps updated: 1 ports\nI0416 04:21:46.865720 1 service.go:301] Service svc-latency-3981/latency-svc-9nx8j updated: 1 ports\nI0416 04:21:46.916592 1 service.go:301] Service svc-latency-3981/latency-svc-qrjc4 updated: 1 ports\nI0416 04:21:46.967659 1 service.go:301] Service svc-latency-3981/latency-svc-ggwzr updated: 1 ports\nI0416 04:21:47.018993 1 service.go:301] Service svc-latency-3981/latency-svc-m9cps updated: 1 ports\nI0416 04:21:47.019236 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wv8nx\" at 100.67.41.194:80/TCP\nI0416 04:21:47.019363 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-d2xrl\" at 100.71.159.33:80/TCP\nI0416 04:21:47.019449 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-jrn8c\" at 100.67.197.2:80/TCP\nI0416 04:21:47.019526 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xct4m\" at 100.70.181.194:80/TCP\nI0416 04:21:47.019600 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4wd85\" at 100.68.132.217:80/TCP\nI0416 04:21:47.019679 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-46zh6\" at 100.70.80.169:80/TCP\nI0416 04:21:47.019939 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-cz7hv\" at 100.68.180.60:80/TCP\nI0416 04:21:47.020039 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-5t6sm\" at 100.66.151.119:80/TCP\nI0416 04:21:47.020121 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-m9cps\" at 100.70.127.202:80/TCP\nI0416 04:21:47.020192 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9jdjb\" at 100.69.169.195:80/TCP\nI0416 04:21:47.020253 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-msttv\" at 100.67.212.159:80/TCP\nI0416 04:21:47.020318 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-bkp2m\" at 100.70.196.39:80/TCP\nI0416 04:21:47.020382 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-rdtjw\" at 100.70.121.175:80/TCP\nI0416 04:21:47.020447 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fbqps\" at 
100.70.162.54:80/TCP\nI0416 04:21:47.020539 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7mgwx\" at 100.65.202.29:80/TCP\nI0416 04:21:47.020603 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-vhd5k\" at 100.67.244.172:80/TCP\nI0416 04:21:47.020664 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xjlnp\" at 100.65.150.47:80/TCP\nI0416 04:21:47.020731 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-56brr\" at 100.71.233.229:80/TCP\nI0416 04:21:47.020797 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9rjdq\" at 100.69.160.255:80/TCP\nI0416 04:21:47.020872 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-bxzdt\" at 100.67.116.74:80/TCP\nI0416 04:21:47.020940 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-8jtd7\" at 100.66.160.170:80/TCP\nI0416 04:21:47.021019 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-ggwzr\" at 100.70.21.218:80/TCP\nI0416 04:21:47.021263 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xf99p\" at 100.65.77.139:80/TCP\nI0416 04:21:47.021351 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xrnwp\" at 100.68.210.98:80/TCP\nI0416 04:21:47.021422 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g248w\" at 100.66.28.205:80/TCP\nI0416 04:21:47.021475 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4dzlb\" at 100.65.7.44:80/TCP\nI0416 04:21:47.021554 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9nx8j\" at 100.70.14.217:80/TCP\nI0416 04:21:47.021644 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qrjc4\" at 100.66.193.174:80/TCP\nI0416 04:21:47.021723 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qxhct\" at 100.67.178.161:80/TCP\nI0416 04:21:47.021786 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wd4zt\" at 100.71.167.7:80/TCP\nI0416 04:21:47.021850 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kjg25\" at 100.68.14.101:80/TCP\nI0416 04:21:47.021922 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wx822\" at 100.66.228.104:80/TCP\nI0416 04:21:47.021996 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kr2r7\" at 100.65.210.191:80/TCP\nI0416 04:21:47.022064 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2w5ct\" at 100.70.102.178:80/TCP\nI0416 04:21:47.022122 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zpx4h\" at 100.69.118.238:80/TCP\nI0416 04:21:47.022178 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9fnrp\" at 100.70.105.191:80/TCP\nI0416 04:21:47.022228 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fzc9p\" at 100.64.130.130:80/TCP\nI0416 04:21:47.023386 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:47.059696 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.468563ms\"\nI0416 04:21:47.068611 1 service.go:301] Service svc-latency-3981/latency-svc-k74n8 updated: 1 ports\nI0416 04:21:47.116956 1 service.go:301] Service svc-latency-3981/latency-svc-8f5zr updated: 1 ports\nI0416 04:21:47.164075 1 service.go:301] Service svc-latency-3981/latency-svc-qx9r9 updated: 1 ports\nI0416 04:21:47.218399 1 service.go:301] Service svc-latency-3981/latency-svc-czjpm updated: 1 ports\nI0416 
04:21:47.278108 1 service.go:301] Service svc-latency-3981/latency-svc-z5ghz updated: 1 ports\nI0416 04:21:47.325398 1 service.go:301] Service svc-latency-3981/latency-svc-mj5h5 updated: 1 ports\nI0416 04:21:47.375721 1 service.go:301] Service svc-latency-3981/latency-svc-l5k7g updated: 1 ports\nI0416 04:21:47.417691 1 service.go:301] Service svc-latency-3981/latency-svc-xkfsd updated: 1 ports\nI0416 04:21:47.466546 1 service.go:301] Service svc-latency-3981/latency-svc-fpts9 updated: 1 ports\nI0416 04:21:47.523189 1 service.go:301] Service svc-latency-3981/latency-svc-kk9qz updated: 1 ports\nI0416 04:21:47.571681 1 service.go:301] Service svc-latency-3981/latency-svc-444nw updated: 1 ports\nI0416 04:21:47.654025 1 service.go:301] Service svc-latency-3981/latency-svc-xkcrs updated: 1 ports\nI0416 04:21:47.678177 1 service.go:301] Service svc-latency-3981/latency-svc-7fk9l updated: 1 ports\nI0416 04:21:47.732951 1 service.go:301] Service svc-latency-3981/latency-svc-8v8st updated: 1 ports\nI0416 04:21:47.768584 1 service.go:301] Service svc-latency-3981/latency-svc-hn2xn updated: 1 ports\nI0416 04:21:47.816683 1 service.go:301] Service svc-latency-3981/latency-svc-zvshw updated: 1 ports\nI0416 04:21:47.883443 1 service.go:301] Service svc-latency-3981/latency-svc-5dll9 updated: 1 ports\nI0416 04:21:47.922429 1 service.go:301] Service svc-latency-3981/latency-svc-mk26h updated: 1 ports\nI0416 04:21:47.967814 1 service.go:301] Service svc-latency-3981/latency-svc-nmb8h updated: 1 ports\nI0416 04:21:48.015655 1 service.go:301] Service svc-latency-3981/latency-svc-6kn8x updated: 1 ports\nI0416 04:21:48.015882 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7fk9l\" at 100.71.219.49:80/TCP\nI0416 04:21:48.015903 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zvshw\" at 100.67.82.79:80/TCP\nI0416 04:21:48.015913 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-nmb8h\" at 100.65.199.81:80/TCP\nI0416 04:21:48.015974 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-czjpm\" at 100.68.130.85:80/TCP\nI0416 04:21:48.015986 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-444nw\" at 100.68.152.16:80/TCP\nI0416 04:21:48.015999 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-8v8st\" at 100.70.248.203:80/TCP\nI0416 04:21:48.016021 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-5dll9\" at 100.66.248.10:80/TCP\nI0416 04:21:48.016053 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-8f5zr\" at 100.69.43.32:80/TCP\nI0416 04:21:48.016088 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-l5k7g\" at 100.64.147.167:80/TCP\nI0416 04:21:48.016102 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xkfsd\" at 100.67.246.40:80/TCP\nI0416 04:21:48.016112 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fpts9\" at 100.69.103.84:80/TCP\nI0416 04:21:48.016124 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kk9qz\" at 100.70.244.22:80/TCP\nI0416 04:21:48.016157 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xkcrs\" at 100.64.51.110:80/TCP\nI0416 04:21:48.016167 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-hn2xn\" at 100.64.40.137:80/TCP\nI0416 04:21:48.016208 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6kn8x\" at 100.66.201.0:80/TCP\nI0416 04:21:48.016218 1 
service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qx9r9\" at 100.71.233.134:80/TCP\nI0416 04:21:48.016231 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-mj5h5\" at 100.70.184.216:80/TCP\nI0416 04:21:48.016246 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-mk26h\" at 100.65.114.132:80/TCP\nI0416 04:21:48.016257 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-k74n8\" at 100.64.232.221:80/TCP\nI0416 04:21:48.016267 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-z5ghz\" at 100.69.114.121:80/TCP\nI0416 04:21:48.016498 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:48.060085 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"44.22774ms\"\nI0416 04:21:48.090816 1 service.go:301] Service svc-latency-3981/latency-svc-l6j5h updated: 1 ports\nI0416 04:21:48.115445 1 service.go:301] Service svc-latency-3981/latency-svc-7gg2n updated: 1 ports\nI0416 04:21:48.166946 1 service.go:301] Service svc-latency-3981/latency-svc-sjbwr updated: 1 ports\nI0416 04:21:48.215984 1 service.go:301] Service svc-latency-3981/latency-svc-tfgb8 updated: 1 ports\nI0416 04:21:48.268236 1 service.go:301] Service svc-latency-3981/latency-svc-w8pmh updated: 1 ports\nI0416 04:21:48.326269 1 service.go:301] Service svc-latency-3981/latency-svc-pjccn updated: 1 ports\nI0416 04:21:48.367499 1 service.go:301] Service svc-latency-3981/latency-svc-5jdjd updated: 1 ports\nI0416 04:21:48.418780 1 service.go:301] Service svc-latency-3981/latency-svc-xmxvf updated: 1 ports\nI0416 04:21:48.471390 1 service.go:301] Service svc-latency-3981/latency-svc-g947j updated: 1 ports\nI0416 04:21:48.531694 1 service.go:301] Service svc-latency-3981/latency-svc-q4kv2 updated: 1 ports\nI0416 04:21:48.566598 1 service.go:301] Service svc-latency-3981/latency-svc-hkppp updated: 1 ports\nI0416 04:21:48.615662 1 service.go:301] Service svc-latency-3981/latency-svc-t7sb8 updated: 1 ports\nI0416 04:21:48.667783 1 service.go:301] Service svc-latency-3981/latency-svc-qjn2p updated: 1 ports\nI0416 04:21:48.716567 1 service.go:301] Service svc-latency-3981/latency-svc-gz4hw updated: 1 ports\nI0416 04:21:48.766814 1 service.go:301] Service svc-latency-3981/latency-svc-jqc6f updated: 1 ports\nI0416 04:21:48.816297 1 service.go:301] Service svc-latency-3981/latency-svc-rqlv9 updated: 1 ports\nI0416 04:21:48.865652 1 service.go:301] Service svc-latency-3981/latency-svc-4bncr updated: 1 ports\nI0416 04:21:48.916535 1 service.go:301] Service svc-latency-3981/latency-svc-q4257 updated: 1 ports\nI0416 04:21:48.966717 1 service.go:301] Service svc-latency-3981/latency-svc-t9ht8 updated: 1 ports\nI0416 04:21:49.015454 1 service.go:301] Service svc-latency-3981/latency-svc-tmz8w updated: 1 ports\nI0416 04:21:49.015904 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-t9ht8\" at 100.69.194.95:80/TCP\nI0416 04:21:49.015984 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-sjbwr\" at 100.68.37.97:80/TCP\nI0416 04:21:49.016042 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-pjccn\" at 100.70.111.177:80/TCP\nI0416 04:21:49.016091 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xmxvf\" at 100.70.101.110:80/TCP\nI0416 04:21:49.016151 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-t7sb8\" at 100.70.201.185:80/TCP\nI0416 04:21:49.016168 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-q4257\" at 
100.69.90.129:80/TCP\nI0416 04:21:49.016231 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7gg2n\" at 100.65.9.144:80/TCP\nI0416 04:21:49.016257 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-tfgb8\" at 100.68.232.109:80/TCP\nI0416 04:21:49.016308 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-5jdjd\" at 100.71.87.192:80/TCP\nI0416 04:21:49.016349 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-gz4hw\" at 100.70.28.55:80/TCP\nI0416 04:21:49.016395 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-tmz8w\" at 100.70.239.15:80/TCP\nI0416 04:21:49.016412 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g947j\" at 100.66.195.56:80/TCP\nI0416 04:21:49.016498 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-q4kv2\" at 100.66.130.172:80/TCP\nI0416 04:21:49.016538 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qjn2p\" at 100.71.157.148:80/TCP\nI0416 04:21:49.016587 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-rqlv9\" at 100.68.239.189:80/TCP\nI0416 04:21:49.016602 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-l6j5h\" at 100.68.140.150:80/TCP\nI0416 04:21:49.016682 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-w8pmh\" at 100.68.69.29:80/TCP\nI0416 04:21:49.016758 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-hkppp\" at 100.66.89.202:80/TCP\nI0416 04:21:49.016774 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-jqc6f\" at 100.66.175.64:80/TCP\nI0416 04:21:49.016845 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4bncr\" at 100.65.42.13:80/TCP\nI0416 04:21:49.017100 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:49.062451 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"46.540775ms\"\nI0416 04:21:49.067556 1 service.go:301] Service svc-latency-3981/latency-svc-n8knn updated: 1 ports\nI0416 04:21:49.117989 1 service.go:301] Service svc-latency-3981/latency-svc-z7k5t updated: 1 ports\nI0416 04:21:49.165601 1 service.go:301] Service svc-latency-3981/latency-svc-qkpxm updated: 1 ports\nI0416 04:21:49.216417 1 service.go:301] Service svc-latency-3981/latency-svc-dcljc updated: 1 ports\nI0416 04:21:49.268172 1 service.go:301] Service svc-latency-3981/latency-svc-f299t updated: 1 ports\nI0416 04:21:49.316769 1 service.go:301] Service svc-latency-3981/latency-svc-5d7hs updated: 1 ports\nI0416 04:21:49.367158 1 service.go:301] Service svc-latency-3981/latency-svc-v9ngm updated: 1 ports\nI0416 04:21:49.414827 1 service.go:301] Service svc-latency-3981/latency-svc-wb26g updated: 1 ports\nI0416 04:21:49.468087 1 service.go:301] Service svc-latency-3981/latency-svc-z48q8 updated: 1 ports\nI0416 04:21:49.519204 1 service.go:301] Service svc-latency-3981/latency-svc-wc4b7 updated: 1 ports\nI0416 04:21:49.568256 1 service.go:301] Service svc-latency-3981/latency-svc-6pnn8 updated: 1 ports\nI0416 04:21:49.622606 1 service.go:301] Service svc-latency-3981/latency-svc-zwvpj updated: 1 ports\nI0416 04:21:49.665214 1 service.go:301] Service svc-latency-3981/latency-svc-g6dmz updated: 1 ports\nI0416 04:21:49.720151 1 service.go:301] Service svc-latency-3981/latency-svc-q6x9d updated: 1 ports\nI0416 04:21:49.767300 1 service.go:301] Service svc-latency-3981/latency-svc-9422l updated: 1 ports\nI0416 04:21:49.820316 1 service.go:301] Service 
svc-latency-3981/latency-svc-pq8ml updated: 1 ports\nI0416 04:21:49.880246 1 service.go:301] Service svc-latency-3981/latency-svc-snpps updated: 1 ports\nI0416 04:21:49.915206 1 service.go:301] Service svc-latency-3981/latency-svc-xfghb updated: 1 ports\nI0416 04:21:50.016784 1 service.go:301] Service svc-latency-3981/latency-svc-r8wlh updated: 1 ports\nI0416 04:21:50.017043 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xfghb\" at 100.71.168.189:80/TCP\nI0416 04:21:50.017181 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-n8knn\" at 100.66.222.224:80/TCP\nI0416 04:21:50.017274 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-v9ngm\" at 100.71.250.70:80/TCP\nI0416 04:21:50.017387 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wc4b7\" at 100.69.253.43:80/TCP\nI0416 04:21:50.017465 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6pnn8\" at 100.69.101.241:80/TCP\nI0416 04:21:50.017651 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zwvpj\" at 100.68.103.200:80/TCP\nI0416 04:21:50.017771 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-q6x9d\" at 100.66.159.156:80/TCP\nI0416 04:21:50.017841 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-pq8ml\" at 100.65.243.100:80/TCP\nI0416 04:21:50.017854 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-z7k5t\" at 100.64.195.174:80/TCP\nI0416 04:21:50.017864 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qkpxm\" at 100.71.99.171:80/TCP\nI0416 04:21:50.017874 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-5d7hs\" at 100.64.119.193:80/TCP\nI0416 04:21:50.017905 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-snpps\" at 100.64.12.16:80/TCP\nI0416 04:21:50.017928 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-f299t\" at 100.66.229.26:80/TCP\nI0416 04:21:50.017974 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wb26g\" at 100.69.67.110:80/TCP\nI0416 04:21:50.018020 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-z48q8\" at 100.68.235.122:80/TCP\nI0416 04:21:50.018033 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g6dmz\" at 100.71.129.123:80/TCP\nI0416 04:21:50.018049 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9422l\" at 100.71.8.182:80/TCP\nI0416 04:21:50.018072 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-dcljc\" at 100.70.81.56:80/TCP\nI0416 04:21:50.018099 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-r8wlh\" at 100.65.174.205:80/TCP\nI0416 04:21:50.018346 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:50.068603 1 service.go:301] Service svc-latency-3981/latency-svc-86r5n updated: 1 ports\nI0416 04:21:50.090179 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"73.168088ms\"\nI0416 04:21:50.120584 1 service.go:301] Service svc-latency-3981/latency-svc-k6gmx updated: 1 ports\nI0416 04:21:50.172756 1 service.go:301] Service svc-latency-3981/latency-svc-8kwfl updated: 1 ports\nI0416 04:21:50.214681 1 service.go:301] Service svc-latency-3981/latency-svc-g6t7j updated: 1 ports\nI0416 04:21:50.267340 1 service.go:301] Service svc-latency-3981/latency-svc-hsjnr updated: 1 ports\nI0416 04:21:50.316138 1 service.go:301] Service svc-latency-3981/latency-svc-g2th9 updated: 
1 ports\nI0416 04:21:50.364954 1 service.go:301] Service svc-latency-3981/latency-svc-pgcfp updated: 1 ports\nI0416 04:21:50.426033 1 service.go:301] Service svc-latency-3981/latency-svc-h9jzs updated: 1 ports\nI0416 04:21:50.463897 1 service.go:301] Service svc-latency-3981/latency-svc-sx4wq updated: 1 ports\nI0416 04:21:50.520076 1 service.go:301] Service svc-latency-3981/latency-svc-6d8r4 updated: 1 ports\nI0416 04:21:50.615765 1 service.go:301] Service svc-latency-3981/latency-svc-gmxj9 updated: 1 ports\nI0416 04:21:50.681471 1 service.go:301] Service svc-latency-3981/latency-svc-v28nm updated: 1 ports\nI0416 04:21:50.716673 1 service.go:301] Service svc-latency-3981/latency-svc-9w255 updated: 1 ports\nI0416 04:21:50.772603 1 service.go:301] Service svc-latency-3981/latency-svc-zbd7h updated: 1 ports\nI0416 04:21:50.814981 1 service.go:301] Service svc-latency-3981/latency-svc-fcc8g updated: 1 ports\nI0416 04:21:50.867582 1 service.go:301] Service svc-latency-3981/latency-svc-xh99l updated: 1 ports\nI0416 04:21:50.923051 1 service.go:301] Service svc-latency-3981/latency-svc-knbng updated: 1 ports\nI0416 04:21:50.983551 1 service.go:301] Service svc-latency-3981/latency-svc-jnmlb updated: 1 ports\nI0416 04:21:51.016143 1 service.go:301] Service svc-latency-3981/latency-svc-wzbj8 updated: 1 ports\nI0416 04:21:51.016660 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-v28nm\" at 100.64.4.47:80/TCP\nI0416 04:21:51.016757 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xh99l\" at 100.64.89.114:80/TCP\nI0416 04:21:51.016811 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-sx4wq\" at 100.68.58.49:80/TCP\nI0416 04:21:51.016825 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zbd7h\" at 100.66.123.227:80/TCP\nI0416 04:21:51.016882 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-86r5n\" at 100.65.5.63:80/TCP\nI0416 04:21:51.016895 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-hsjnr\" at 100.69.249.163:80/TCP\nI0416 04:21:51.016906 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-knbng\" at 100.65.215.160:80/TCP\nI0416 04:21:51.016940 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wzbj8\" at 100.64.24.129:80/TCP\nI0416 04:21:51.016970 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g2th9\" at 100.68.46.251:80/TCP\nI0416 04:21:51.017012 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6d8r4\" at 100.65.120.24:80/TCP\nI0416 04:21:51.017025 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g6t7j\" at 100.65.227.249:80/TCP\nI0416 04:21:51.017044 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-pgcfp\" at 100.66.115.73:80/TCP\nI0416 04:21:51.017095 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-h9jzs\" at 100.64.124.21:80/TCP\nI0416 04:21:51.017145 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-gmxj9\" at 100.68.133.11:80/TCP\nI0416 04:21:51.017190 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9w255\" at 100.68.235.37:80/TCP\nI0416 04:21:51.017251 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fcc8g\" at 100.68.163.129:80/TCP\nI0416 04:21:51.017302 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-k6gmx\" at 100.67.45.50:80/TCP\nI0416 04:21:51.017315 1 service.go:416] Adding new 
service port \"svc-latency-3981/latency-svc-8kwfl\" at 100.64.34.171:80/TCP\nI0416 04:21:51.017349 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-jnmlb\" at 100.69.28.136:80/TCP\nI0416 04:21:51.017605 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:51.072715 1 service.go:301] Service svc-latency-3981/latency-svc-dks2n updated: 1 ports\nI0416 04:21:51.085314 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"68.65847ms\"\nI0416 04:21:51.124609 1 service.go:301] Service svc-latency-3981/latency-svc-qp4nv updated: 1 ports\nI0416 04:21:51.168217 1 service.go:301] Service svc-latency-3981/latency-svc-95lzs updated: 1 ports\nI0416 04:21:51.228371 1 service.go:301] Service svc-latency-3981/latency-svc-zhk72 updated: 1 ports\nI0416 04:21:51.267110 1 service.go:301] Service svc-latency-3981/latency-svc-j8895 updated: 1 ports\nI0416 04:21:51.317563 1 service.go:301] Service svc-latency-3981/latency-svc-k2chp updated: 1 ports\nI0416 04:21:51.372977 1 service.go:301] Service svc-latency-3981/latency-svc-92phz updated: 1 ports\nI0416 04:21:51.427027 1 service.go:301] Service svc-latency-3981/latency-svc-dr54h updated: 1 ports\nI0416 04:21:51.467681 1 service.go:301] Service svc-latency-3981/latency-svc-h5rwt updated: 1 ports\nI0416 04:21:51.552426 1 service.go:301] Service svc-latency-3981/latency-svc-c627r updated: 1 ports\nI0416 04:21:51.590017 1 service.go:301] Service svc-latency-3981/latency-svc-bk9mt updated: 1 ports\nI0416 04:21:51.632600 1 service.go:301] Service svc-latency-3981/latency-svc-7wnrp updated: 1 ports\nI0416 04:21:51.671284 1 service.go:301] Service svc-latency-3981/latency-svc-rkxrw updated: 1 ports\nI0416 04:21:51.715701 1 service.go:301] Service svc-latency-3981/latency-svc-2k6bk updated: 1 ports\nI0416 04:21:51.797609 1 service.go:301] Service svc-latency-3981/latency-svc-wp8xr updated: 1 ports\nI0416 04:21:51.837770 1 service.go:301] Service svc-latency-3981/latency-svc-lrd6g updated: 1 ports\nI0416 04:21:51.873664 1 service.go:301] Service svc-latency-3981/latency-svc-2jvgg updated: 1 ports\nI0416 04:21:51.921251 1 service.go:301] Service svc-latency-3981/latency-svc-vv8q4 updated: 1 ports\nI0416 04:21:51.969619 1 service.go:301] Service svc-latency-3981/latency-svc-6drnp updated: 1 ports\nI0416 04:21:52.015313 1 service.go:301] Service svc-latency-3981/latency-svc-wvw6k updated: 1 ports\nI0416 04:21:52.015637 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-92phz\" at 100.68.157.217:80/TCP\nI0416 04:21:52.015658 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-vv8q4\" at 100.66.57.14:80/TCP\nI0416 04:21:52.015711 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6drnp\" at 100.69.125.43:80/TCP\nI0416 04:21:52.015726 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-k2chp\" at 100.71.97.153:80/TCP\nI0416 04:21:52.015755 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-dr54h\" at 100.70.103.136:80/TCP\nI0416 04:21:52.015787 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-h5rwt\" at 100.69.93.81:80/TCP\nI0416 04:21:52.015801 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-c627r\" at 100.70.202.147:80/TCP\nI0416 04:21:52.015841 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2k6bk\" at 100.65.47.68:80/TCP\nI0416 04:21:52.015853 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-lrd6g\" at 100.67.38.52:80/TCP\nI0416 
04:21:52.015863 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wvw6k\" at 100.66.252.249:80/TCP\nI0416 04:21:52.015873 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-dks2n\" at 100.66.128.94:80/TCP\nI0416 04:21:52.015900 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7wnrp\" at 100.67.107.151:80/TCP\nI0416 04:21:52.015936 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-rkxrw\" at 100.67.6.129:80/TCP\nI0416 04:21:52.015949 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2jvgg\" at 100.66.103.196:80/TCP\nI0416 04:21:52.015973 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qp4nv\" at 100.67.32.71:80/TCP\nI0416 04:21:52.015988 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-95lzs\" at 100.67.237.130:80/TCP\nI0416 04:21:52.016001 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zhk72\" at 100.68.66.49:80/TCP\nI0416 04:21:52.016013 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-j8895\" at 100.66.80.236:80/TCP\nI0416 04:21:52.016023 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-bk9mt\" at 100.67.199.27:80/TCP\nI0416 04:21:52.016063 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wp8xr\" at 100.66.152.62:80/TCP\nI0416 04:21:52.016298 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:52.080559 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"64.914979ms\"\nI0416 04:21:52.116888 1 service.go:301] Service svc-latency-3981/latency-svc-2zdpq updated: 1 ports\nI0416 04:21:52.167561 1 service.go:301] Service svc-latency-3981/latency-svc-w9f2h updated: 1 ports\nI0416 04:21:52.222260 1 service.go:301] Service svc-latency-3981/latency-svc-v4227 updated: 1 ports\nI0416 04:21:52.267002 1 service.go:301] Service svc-latency-3981/latency-svc-x7hwk updated: 1 ports\nI0416 04:21:52.331972 1 service.go:301] Service svc-latency-3981/latency-svc-hdl8p updated: 1 ports\nI0416 04:21:52.365866 1 service.go:301] Service svc-latency-3981/latency-svc-b5hzv updated: 1 ports\nI0416 04:21:52.415207 1 service.go:301] Service svc-latency-3981/latency-svc-kr2z8 updated: 1 ports\nI0416 04:21:52.467046 1 service.go:301] Service svc-latency-3981/latency-svc-2lp2x updated: 1 ports\nI0416 04:21:52.534789 1 service.go:301] Service svc-latency-3981/latency-svc-sgq7s updated: 1 ports\nI0416 04:21:52.584818 1 service.go:301] Service svc-latency-3981/latency-svc-4f65r updated: 1 ports\nI0416 04:21:52.669277 1 service.go:301] Service svc-latency-3981/latency-svc-d7xgc updated: 1 ports\nI0416 04:21:52.717226 1 service.go:301] Service svc-latency-3981/latency-svc-szgj5 updated: 1 ports\nI0416 04:21:52.773794 1 service.go:301] Service svc-latency-3981/latency-svc-zwsbh updated: 1 ports\nI0416 04:21:52.823781 1 service.go:301] Service svc-latency-3981/latency-svc-j87cw updated: 1 ports\nI0416 04:21:52.868051 1 service.go:301] Service svc-latency-3981/latency-svc-b5snz updated: 1 ports\nI0416 04:21:52.920602 1 service.go:301] Service svc-latency-3981/latency-svc-lmc2v updated: 1 ports\nI0416 04:21:52.964935 1 service.go:301] Service svc-latency-3981/latency-svc-66nfb updated: 1 ports\nI0416 04:21:53.016785 1 service.go:301] Service svc-latency-3981/latency-svc-vmjrn updated: 1 ports\nI0416 04:21:53.017217 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-w9f2h\" at 100.70.153.241:80/TCP\nI0416 04:21:53.017239 1 service.go:416] 
Adding new service port \"svc-latency-3981/latency-svc-hdl8p\" at 100.71.21.219:80/TCP\nI0416 04:21:53.017271 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kr2z8\" at 100.71.190.233:80/TCP\nI0416 04:21:53.017310 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-d7xgc\" at 100.70.87.162:80/TCP\nI0416 04:21:53.017340 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-v4227\" at 100.67.241.152:80/TCP\nI0416 04:21:53.017364 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2lp2x\" at 100.71.203.82:80/TCP\nI0416 04:21:53.017380 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-sgq7s\" at 100.70.85.198:80/TCP\nI0416 04:21:53.017396 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-j87cw\" at 100.68.122.83:80/TCP\nI0416 04:21:53.017411 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-x7hwk\" at 100.70.132.99:80/TCP\nI0416 04:21:53.017449 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-b5hzv\" at 100.71.99.253:80/TCP\nI0416 04:21:53.017466 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4f65r\" at 100.70.255.200:80/TCP\nI0416 04:21:53.017491 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zwsbh\" at 100.64.3.41:80/TCP\nI0416 04:21:53.017504 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-66nfb\" at 100.64.78.162:80/TCP\nI0416 04:21:53.017670 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-vmjrn\" at 100.71.178.204:80/TCP\nI0416 04:21:53.017685 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2zdpq\" at 100.64.87.205:80/TCP\nI0416 04:21:53.017711 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-szgj5\" at 100.64.66.132:80/TCP\nI0416 04:21:53.017721 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-b5snz\" at 100.70.43.94:80/TCP\nI0416 04:21:53.017736 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-lmc2v\" at 100.71.188.87:80/TCP\nI0416 04:21:53.017987 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:53.067316 1 service.go:301] Service svc-latency-3981/latency-svc-sspvt updated: 1 ports\nI0416 04:21:53.085372 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"68.147318ms\"\nI0416 04:21:53.124898 1 service.go:301] Service svc-latency-3981/latency-svc-b9qn6 updated: 1 ports\nI0416 04:21:53.167037 1 service.go:301] Service svc-latency-3981/latency-svc-8nwp6 updated: 1 ports\nI0416 04:21:53.216821 1 service.go:301] Service svc-latency-3981/latency-svc-4k86d updated: 1 ports\nI0416 04:21:53.268126 1 service.go:301] Service svc-latency-3981/latency-svc-9gtqh updated: 1 ports\nI0416 04:21:53.316526 1 service.go:301] Service svc-latency-3981/latency-svc-sn88h updated: 1 ports\nI0416 04:21:53.393640 1 service.go:301] Service svc-latency-3981/latency-svc-2clqv updated: 1 ports\nI0416 04:21:53.418095 1 service.go:301] Service svc-latency-3981/latency-svc-ccwh2 updated: 1 ports\nI0416 04:21:53.478680 1 service.go:301] Service svc-latency-3981/latency-svc-kfp8t updated: 1 ports\nI0416 04:21:53.517113 1 service.go:301] Service svc-latency-3981/latency-svc-l8znx updated: 1 ports\nI0416 04:21:53.639224 1 service.go:301] Service svc-latency-3981/latency-svc-kld75 updated: 1 ports\nI0416 04:21:54.028842 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-sspvt\" at 100.68.104.81:80/TCP\nI0416 
04:21:54.029035 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-sn88h\" at 100.68.47.26:80/TCP\nI0416 04:21:54.029130 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2clqv\" at 100.64.185.218:80/TCP\nI0416 04:21:54.029210 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kfp8t\" at 100.71.84.228:80/TCP\nI0416 04:21:54.029287 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kld75\" at 100.68.208.69:80/TCP\nI0416 04:21:54.029373 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-l8znx\" at 100.68.234.147:80/TCP\nI0416 04:21:54.029459 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-b9qn6\" at 100.64.142.192:80/TCP\nI0416 04:21:54.029605 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-8nwp6\" at 100.68.212.250:80/TCP\nI0416 04:21:54.029694 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4k86d\" at 100.64.148.54:80/TCP\nI0416 04:21:54.029777 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9gtqh\" at 100.66.27.216:80/TCP\nI0416 04:21:54.029846 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-ccwh2\" at 100.70.95.200:80/TCP\nI0416 04:21:54.030380 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:54.110854 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"82.011178ms\"\nI0416 04:21:55.111363 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:55.168987 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"57.737855ms\"\nI0416 04:21:56.539746 1 service.go:301] Service webhook-4171/e2e-test-webhook updated: 1 ports\nI0416 04:21:56.539784 1 service.go:416] Adding new service port \"webhook-4171/e2e-test-webhook\" at 100.67.32.153:8443/TCP\nI0416 04:21:56.540065 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:56.606359 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"66.571246ms\"\nI0416 04:21:57.606619 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:57.700907 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"94.423606ms\"\nI0416 04:21:59.573480 1 service.go:301] Service svc-latency-3981/latency-svc-24kvw updated: 0 ports\nI0416 04:21:59.573517 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-24kvw\"\nI0416 04:21:59.573779 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:59.595213 1 service.go:301] Service svc-latency-3981/latency-svc-2clqv updated: 0 ports\nI0416 04:21:59.614547 1 service.go:301] Service svc-latency-3981/latency-svc-2jvgg updated: 0 ports\nI0416 04:21:59.662847 1 service.go:301] Service svc-latency-3981/latency-svc-2k6bk updated: 0 ports\nI0416 04:21:59.677426 1 service.go:301] Service svc-latency-3981/latency-svc-2lf5w updated: 0 ports\nI0416 04:21:59.696326 1 service.go:301] Service svc-latency-3981/latency-svc-2lp2x updated: 0 ports\nI0416 04:21:59.705199 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"131.671879ms\"\nI0416 04:21:59.705285 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2clqv\"\nI0416 04:21:59.705346 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2jvgg\"\nI0416 04:21:59.705431 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2k6bk\"\nI0416 04:21:59.705522 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2lf5w\"\nI0416 04:21:59.705621 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2lp2x\"\nI0416 04:21:59.705908 1 
proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:59.717476 1 service.go:301] Service svc-latency-3981/latency-svc-2w5ct updated: 0 ports\nI0416 04:21:59.735732 1 service.go:301] Service svc-latency-3981/latency-svc-2x2b2 updated: 0 ports\nI0416 04:21:59.748212 1 service.go:301] Service svc-latency-3981/latency-svc-2zdpq updated: 0 ports\nI0416 04:21:59.773669 1 service.go:301] Service svc-latency-3981/latency-svc-444nw updated: 0 ports\nI0416 04:21:59.786642 1 service.go:301] Service svc-latency-3981/latency-svc-44wwt updated: 0 ports\nI0416 04:21:59.797976 1 service.go:301] Service svc-latency-3981/latency-svc-46zh6 updated: 0 ports\nI0416 04:21:59.817331 1 service.go:301] Service svc-latency-3981/latency-svc-49w6x updated: 0 ports\nI0416 04:21:59.834594 1 service.go:301] Service svc-latency-3981/latency-svc-4bncr updated: 0 ports\nI0416 04:21:59.836746 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"131.453699ms\"\nI0416 04:21:59.844525 1 service.go:301] Service svc-latency-3981/latency-svc-4dzlb updated: 0 ports\nI0416 04:21:59.854165 1 service.go:301] Service svc-latency-3981/latency-svc-4f65r updated: 0 ports\nI0416 04:21:59.869919 1 service.go:301] Service svc-latency-3981/latency-svc-4k86d updated: 0 ports\nI0416 04:21:59.884622 1 service.go:301] Service svc-latency-3981/latency-svc-4mdx5 updated: 0 ports\nI0416 04:21:59.893316 1 service.go:301] Service svc-latency-3981/latency-svc-4sxrn updated: 0 ports\nI0416 04:21:59.906603 1 service.go:301] Service svc-latency-3981/latency-svc-4wd85 updated: 0 ports\nI0416 04:21:59.914436 1 service.go:301] Service svc-latency-3981/latency-svc-56brr updated: 0 ports\nI0416 04:21:59.922857 1 service.go:301] Service svc-latency-3981/latency-svc-5d7hs updated: 0 ports\nI0416 04:21:59.934625 1 service.go:301] Service svc-latency-3981/latency-svc-5dll9 updated: 0 ports\nI0416 04:21:59.946547 1 service.go:301] Service svc-latency-3981/latency-svc-5jdjd updated: 0 ports\nI0416 04:21:59.953286 1 service.go:301] Service svc-latency-3981/latency-svc-5t6sm updated: 0 ports\nI0416 04:21:59.961479 1 service.go:301] Service svc-latency-3981/latency-svc-66nfb updated: 0 ports\nI0416 04:21:59.986708 1 service.go:301] Service svc-latency-3981/latency-svc-6d8r4 updated: 0 ports\nI0416 04:21:59.995304 1 service.go:301] Service svc-latency-3981/latency-svc-6drnp updated: 0 ports\nI0416 04:22:00.010341 1 service.go:301] Service svc-latency-3981/latency-svc-6kkhg updated: 0 ports\nI0416 04:22:00.019398 1 service.go:301] Service svc-latency-3981/latency-svc-6kn8x updated: 0 ports\nI0416 04:22:00.030645 1 service.go:301] Service svc-latency-3981/latency-svc-6pnn8 updated: 0 ports\nI0416 04:22:00.039459 1 service.go:301] Service svc-latency-3981/latency-svc-6txzc updated: 0 ports\nI0416 04:22:00.045956 1 service.go:301] Service svc-latency-3981/latency-svc-7fk9l updated: 0 ports\nI0416 04:22:00.052722 1 service.go:301] Service svc-latency-3981/latency-svc-7gg2n updated: 0 ports\nI0416 04:22:00.059149 1 service.go:301] Service svc-latency-3981/latency-svc-7mgwx updated: 0 ports\nI0416 04:22:00.067591 1 service.go:301] Service svc-latency-3981/latency-svc-7vw8t updated: 0 ports\nI0416 04:22:00.085532 1 service.go:301] Service svc-latency-3981/latency-svc-7wnrp updated: 0 ports\nI0416 04:22:00.099480 1 service.go:301] Service svc-latency-3981/latency-svc-86r5n updated: 0 ports\nI0416 04:22:00.112607 1 service.go:301] Service svc-latency-3981/latency-svc-8f5zr updated: 0 ports\nI0416 04:22:00.128155 1 service.go:301] Service svc-latency-3981/latency-svc-8jtd7 
updated: 0 ports\nI0416 04:22:00.159838 1 service.go:301] Service svc-latency-3981/latency-svc-8kwfl updated: 0 ports\nI0416 04:22:00.170943 1 service.go:301] Service svc-latency-3981/latency-svc-8nwp6 updated: 0 ports\nI0416 04:22:00.186542 1 service.go:301] Service svc-latency-3981/latency-svc-8v8st updated: 0 ports\nI0416 04:22:00.193072 1 service.go:301] Service svc-latency-3981/latency-svc-92phz updated: 0 ports\nI0416 04:22:00.209256 1 service.go:301] Service svc-latency-3981/latency-svc-9422l updated: 0 ports\nI0416 04:22:00.218209 1 service.go:301] Service svc-latency-3981/latency-svc-95lzs updated: 0 ports\nI0416 04:22:00.229899 1 service.go:301] Service svc-latency-3981/latency-svc-9fnrp updated: 0 ports\nI0416 04:22:00.238210 1 service.go:301] Service svc-latency-3981/latency-svc-9gtqh updated: 0 ports\nI0416 04:22:00.245485 1 service.go:301] Service svc-latency-3981/latency-svc-9jdjb updated: 0 ports\nI0416 04:22:00.252441 1 service.go:301] Service svc-latency-3981/latency-svc-9nx8j updated: 0 ports\nI0416 04:22:00.260675 1 service.go:301] Service svc-latency-3981/latency-svc-9rjdq updated: 0 ports\nI0416 04:22:00.269268 1 service.go:301] Service svc-latency-3981/latency-svc-9w255 updated: 0 ports\nI0416 04:22:00.292996 1 service.go:301] Service svc-latency-3981/latency-svc-b5hzv updated: 0 ports\nI0416 04:22:00.300322 1 service.go:301] Service svc-latency-3981/latency-svc-b5snz updated: 0 ports\nI0416 04:22:00.307146 1 service.go:301] Service svc-latency-3981/latency-svc-b9qn6 updated: 0 ports\nI0416 04:22:00.315812 1 service.go:301] Service svc-latency-3981/latency-svc-bk9mt updated: 0 ports\nI0416 04:22:00.323590 1 service.go:301] Service svc-latency-3981/latency-svc-bkp2m updated: 0 ports\nI0416 04:22:00.342687 1 service.go:301] Service svc-latency-3981/latency-svc-bxzdt updated: 0 ports\nI0416 04:22:00.354939 1 service.go:301] Service svc-latency-3981/latency-svc-c627r updated: 0 ports\nI0416 04:22:00.363265 1 service.go:301] Service svc-latency-3981/latency-svc-c8p2l updated: 0 ports\nI0416 04:22:00.371263 1 service.go:301] Service svc-latency-3981/latency-svc-ccwh2 updated: 0 ports\nI0416 04:22:00.381596 1 service.go:301] Service svc-latency-3981/latency-svc-cz7hv updated: 0 ports\nI0416 04:22:00.388905 1 service.go:301] Service svc-latency-3981/latency-svc-czjpm updated: 0 ports\nI0416 04:22:00.406274 1 service.go:301] Service svc-latency-3981/latency-svc-d2xrl updated: 0 ports\nI0416 04:22:00.416586 1 service.go:301] Service svc-latency-3981/latency-svc-d7rjl updated: 0 ports\nI0416 04:22:00.423354 1 service.go:301] Service svc-latency-3981/latency-svc-d7xgc updated: 0 ports\nI0416 04:22:00.432701 1 service.go:301] Service svc-latency-3981/latency-svc-d9q85 updated: 0 ports\nI0416 04:22:00.440006 1 service.go:301] Service svc-latency-3981/latency-svc-dcljc updated: 0 ports\nI0416 04:22:00.449375 1 service.go:301] Service svc-latency-3981/latency-svc-dks2n updated: 0 ports\nI0416 04:22:00.458097 1 service.go:301] Service svc-latency-3981/latency-svc-dr54h updated: 0 ports\nI0416 04:22:00.466011 1 service.go:301] Service svc-latency-3981/latency-svc-dtb4m updated: 0 ports\nI0416 04:22:00.478941 1 service.go:301] Service svc-latency-3981/latency-svc-f299t updated: 0 ports\nI0416 04:22:00.485676 1 service.go:301] Service svc-latency-3981/latency-svc-fbqps updated: 0 ports\nI0416 04:22:00.500211 1 service.go:301] Service svc-latency-3981/latency-svc-fcc8g updated: 0 ports\nI0416 04:22:00.520001 1 service.go:301] Service svc-latency-3981/latency-svc-fglqx updated: 0 
ports\nI0416 04:22:00.529488 1 service.go:301] Service svc-latency-3981/latency-svc-fpts9 updated: 0 ports\nI0416 04:22:00.537127 1 service.go:301] Service svc-latency-3981/latency-svc-fw4ht updated: 0 ports\nI0416 04:22:00.554368 1 service.go:301] Service svc-latency-3981/latency-svc-fzc9p updated: 0 ports\nI0416 04:22:00.594246 1 service.go:301] Service svc-latency-3981/latency-svc-g248w updated: 0 ports\nI0416 04:22:00.594390 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4sxrn\"\nI0416 04:22:00.594469 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7fk9l\"\nI0416 04:22:00.594536 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8f5zr\"\nI0416 04:22:00.594580 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9jdjb\"\nI0416 04:22:00.594639 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-bk9mt\"\nI0416 04:22:00.594683 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-c627r\"\nI0416 04:22:00.594743 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4bncr\"\nI0416 04:22:00.594785 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6kn8x\"\nI0416 04:22:00.594848 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-ccwh2\"\nI0416 04:22:00.594898 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4mdx5\"\nI0416 04:22:00.594981 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6kkhg\"\nI0416 04:22:00.595056 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fzc9p\"\nI0416 04:22:00.595118 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g248w\"\nI0416 04:22:00.595183 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-444nw\"\nI0416 04:22:00.595243 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-5dll9\"\nI0416 04:22:00.595308 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6pnn8\"\nI0416 04:22:00.595369 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-bxzdt\"\nI0416 04:22:00.595432 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-cz7hv\"\nI0416 04:22:00.595493 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fw4ht\"\nI0416 04:22:00.595556 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-dr54h\"\nI0416 04:22:00.595624 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fcc8g\"\nI0416 04:22:00.595692 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6txzc\"\nI0416 04:22:00.595741 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8kwfl\"\nI0416 04:22:00.595805 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8nwp6\"\nI0416 04:22:00.595847 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9gtqh\"\nI0416 04:22:00.595909 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-b9qn6\"\nI0416 04:22:00.595954 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-d7rjl\"\nI0416 04:22:00.596023 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-c8p2l\"\nI0416 04:22:00.596104 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fbqps\"\nI0416 04:22:00.596159 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4dzlb\"\nI0416 04:22:00.596254 1 service.go:441] Removing service port 
\"svc-latency-3981/latency-svc-4f65r\"\nI0416 04:22:00.596340 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4k86d\"\nI0416 04:22:00.596418 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8v8st\"\nI0416 04:22:00.596679 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-95lzs\"\nI0416 04:22:00.596815 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-b5hzv\"\nI0416 04:22:00.596892 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-66nfb\"\nI0416 04:22:00.596999 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7mgwx\"\nI0416 04:22:00.597072 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-d7xgc\"\nI0416 04:22:00.597134 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4wd85\"\nI0416 04:22:00.597209 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-b5snz\"\nI0416 04:22:00.597269 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2w5ct\"\nI0416 04:22:00.597338 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7gg2n\"\nI0416 04:22:00.597393 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-92phz\"\nI0416 04:22:00.597467 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-czjpm\"\nI0416 04:22:00.597557 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9rjdq\"\nI0416 04:22:00.597632 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2zdpq\"\nI0416 04:22:00.599590 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-44wwt\"\nI0416 04:22:00.599908 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-56brr\"\nI0416 04:22:00.600004 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-5d7hs\"\nI0416 04:22:00.600089 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7wnrp\"\nI0416 04:22:00.600170 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9fnrp\"\nI0416 04:22:00.600255 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2x2b2\"\nI0416 04:22:00.600332 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-46zh6\"\nI0416 04:22:00.600417 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6d8r4\"\nI0416 04:22:00.600519 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-bkp2m\"\nI0416 04:22:00.600592 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-d2xrl\"\nI0416 04:22:00.600642 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-d9q85\"\nI0416 04:22:00.600719 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9422l\"\nI0416 04:22:00.600786 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-f299t\"\nI0416 04:22:00.600854 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-dks2n\"\nI0416 04:22:00.600926 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fpts9\"\nI0416 04:22:00.600988 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-49w6x\"\nI0416 04:22:00.601044 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-5t6sm\"\nI0416 04:22:00.601127 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7vw8t\"\nI0416 04:22:00.601187 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8jtd7\"\nI0416 04:22:00.601247 1 service.go:441] 
Removing service port \"svc-latency-3981/latency-svc-9nx8j\"\nI0416 04:22:00.601289 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9w255\"\nI0416 04:22:00.601346 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-5jdjd\"\nI0416 04:22:00.601388 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6drnp\"\nI0416 04:22:00.601442 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-86r5n\"\nI0416 04:22:00.601484 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-dcljc\"\nI0416 04:22:00.601554 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-dtb4m\"\nI0416 04:22:00.601616 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fglqx\"\nI0416 04:22:00.601938 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:00.639182 1 service.go:301] Service svc-latency-3981/latency-svc-g2th9 updated: 0 ports\nI0416 04:22:00.669562 1 service.go:301] Service svc-latency-3981/latency-svc-g6dmz updated: 0 ports\nI0416 04:22:00.695916 1 service.go:301] Service svc-latency-3981/latency-svc-g6t7j updated: 0 ports\nI0416 04:22:00.709251 1 service.go:301] Service svc-latency-3981/latency-svc-g85nc updated: 0 ports\nI0416 04:22:00.730230 1 service.go:301] Service svc-latency-3981/latency-svc-g947j updated: 0 ports\nI0416 04:22:00.747074 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"152.663559ms\"\nI0416 04:22:00.748559 1 service.go:301] Service svc-latency-3981/latency-svc-gdvbv updated: 0 ports\nI0416 04:22:00.773054 1 service.go:301] Service svc-latency-3981/latency-svc-ggwzr updated: 0 ports\nI0416 04:22:00.787629 1 service.go:301] Service svc-latency-3981/latency-svc-gmxj9 updated: 0 ports\nI0416 04:22:00.798241 1 service.go:301] Service svc-latency-3981/latency-svc-gz4hw updated: 0 ports\nI0416 04:22:00.814049 1 service.go:301] Service svc-latency-3981/latency-svc-h5rwt updated: 0 ports\nI0416 04:22:00.820639 1 service.go:301] Service svc-latency-3981/latency-svc-h9jzs updated: 0 ports\nI0416 04:22:00.841195 1 service.go:301] Service svc-latency-3981/latency-svc-hdl8p updated: 0 ports\nI0416 04:22:00.849516 1 service.go:301] Service svc-latency-3981/latency-svc-hkppp updated: 0 ports\nI0416 04:22:00.856026 1 service.go:301] Service svc-latency-3981/latency-svc-hn2xn updated: 0 ports\nI0416 04:22:00.863272 1 service.go:301] Service svc-latency-3981/latency-svc-hsjnr updated: 0 ports\nI0416 04:22:00.871035 1 service.go:301] Service svc-latency-3981/latency-svc-j87cw updated: 0 ports\nI0416 04:22:00.879269 1 service.go:301] Service svc-latency-3981/latency-svc-j8895 updated: 0 ports\nI0416 04:22:00.889595 1 service.go:301] Service svc-latency-3981/latency-svc-j88m6 updated: 0 ports\nI0416 04:22:00.904170 1 service.go:301] Service svc-latency-3981/latency-svc-jnmlb updated: 0 ports\nI0416 04:22:00.913184 1 service.go:301] Service svc-latency-3981/latency-svc-jqc6f updated: 0 ports\nI0416 04:22:00.919309 1 service.go:301] Service svc-latency-3981/latency-svc-jrn8c updated: 0 ports\nI0416 04:22:00.930185 1 service.go:301] Service svc-latency-3981/latency-svc-k2chp updated: 0 ports\nI0416 04:22:00.944107 1 service.go:301] Service svc-latency-3981/latency-svc-k6gmx updated: 0 ports\nI0416 04:22:00.953658 1 service.go:301] Service svc-latency-3981/latency-svc-k74n8 updated: 0 ports\nI0416 04:22:00.960224 1 service.go:301] Service svc-latency-3981/latency-svc-kfp8t updated: 0 ports\nI0416 04:22:00.967288 1 service.go:301] Service svc-latency-3981/latency-svc-kjg25 updated: 
0 ports\nI0416 04:22:00.975309 1 service.go:301] Service svc-latency-3981/latency-svc-kk9qz updated: 0 ports\nI0416 04:22:00.983283 1 service.go:301] Service svc-latency-3981/latency-svc-kld75 updated: 0 ports\nI0416 04:22:00.991847 1 service.go:301] Service svc-latency-3981/latency-svc-knbng updated: 0 ports\nI0416 04:22:01.000702 1 service.go:301] Service svc-latency-3981/latency-svc-kr2r7 updated: 0 ports\nI0416 04:22:01.008263 1 service.go:301] Service svc-latency-3981/latency-svc-kr2z8 updated: 0 ports\nI0416 04:22:01.016017 1 service.go:301] Service svc-latency-3981/latency-svc-kw5hl updated: 0 ports\nI0416 04:22:01.023916 1 service.go:301] Service svc-latency-3981/latency-svc-kwzpv updated: 0 ports\nI0416 04:22:01.031817 1 service.go:301] Service svc-latency-3981/latency-svc-l5k7g updated: 0 ports\nI0416 04:22:01.039367 1 service.go:301] Service svc-latency-3981/latency-svc-l6j5h updated: 0 ports\nI0416 04:22:01.059016 1 service.go:301] Service svc-latency-3981/latency-svc-l8znx updated: 0 ports\nI0416 04:22:01.066543 1 service.go:301] Service svc-latency-3981/latency-svc-lhtjh updated: 0 ports\nI0416 04:22:01.076847 1 service.go:301] Service svc-latency-3981/latency-svc-lmc2v updated: 0 ports\nI0416 04:22:01.085059 1 service.go:301] Service svc-latency-3981/latency-svc-lrd6g updated: 0 ports\nI0416 04:22:01.095089 1 service.go:301] Service svc-latency-3981/latency-svc-m9cps updated: 0 ports\nI0416 04:22:01.104223 1 service.go:301] Service svc-latency-3981/latency-svc-mj5h5 updated: 0 ports\nI0416 04:22:01.112637 1 service.go:301] Service svc-latency-3981/latency-svc-mk26h updated: 0 ports\nI0416 04:22:01.122111 1 service.go:301] Service svc-latency-3981/latency-svc-msttv updated: 0 ports\nI0416 04:22:01.137864 1 service.go:301] Service svc-latency-3981/latency-svc-n2tgb updated: 0 ports\nI0416 04:22:01.144778 1 service.go:301] Service svc-latency-3981/latency-svc-n8knn updated: 0 ports\nI0416 04:22:01.153768 1 service.go:301] Service svc-latency-3981/latency-svc-nmb8h updated: 0 ports\nI0416 04:22:01.165379 1 service.go:301] Service svc-latency-3981/latency-svc-p7zbc updated: 0 ports\nI0416 04:22:01.173227 1 service.go:301] Service svc-latency-3981/latency-svc-p9h96 updated: 0 ports\nI0416 04:22:01.180834 1 service.go:301] Service svc-latency-3981/latency-svc-pgcfp updated: 0 ports\nI0416 04:22:01.188695 1 service.go:301] Service svc-latency-3981/latency-svc-pjccn updated: 0 ports\nI0416 04:22:01.196149 1 service.go:301] Service svc-latency-3981/latency-svc-pkvbd updated: 0 ports\nI0416 04:22:01.206326 1 service.go:301] Service svc-latency-3981/latency-svc-pq8ml updated: 0 ports\nI0416 04:22:01.213361 1 service.go:301] Service svc-latency-3981/latency-svc-q4257 updated: 0 ports\nI0416 04:22:01.220591 1 service.go:301] Service svc-latency-3981/latency-svc-q4kv2 updated: 0 ports\nI0416 04:22:01.228775 1 service.go:301] Service svc-latency-3981/latency-svc-q6x9d updated: 0 ports\nI0416 04:22:01.238033 1 service.go:301] Service svc-latency-3981/latency-svc-q7xh9 updated: 0 ports\nI0416 04:22:01.246145 1 service.go:301] Service svc-latency-3981/latency-svc-qgpmd updated: 0 ports\nI0416 04:22:01.254792 1 service.go:301] Service svc-latency-3981/latency-svc-qjn2p updated: 0 ports\nI0416 04:22:01.263242 1 service.go:301] Service svc-latency-3981/latency-svc-qkpxm updated: 0 ports\nI0416 04:22:01.293199 1 service.go:301] Service svc-latency-3981/latency-svc-qmd8f updated: 0 ports\nI0416 04:22:01.311790 1 service.go:301] Service svc-latency-3981/latency-svc-qp4nv updated: 0 ports\nI0416 
04:22:01.324162 1 service.go:301] Service svc-latency-3981/latency-svc-qrjc4 updated: 0 ports\nI0416 04:22:01.333919 1 service.go:301] Service svc-latency-3981/latency-svc-qrm7g updated: 0 ports\nI0416 04:22:01.339679 1 service.go:301] Service svc-latency-3981/latency-svc-qx9r9 updated: 0 ports\nI0416 04:22:01.347129 1 service.go:301] Service svc-latency-3981/latency-svc-qxhct updated: 0 ports\nI0416 04:22:01.356724 1 service.go:301] Service svc-latency-3981/latency-svc-r7jf4 updated: 0 ports\nI0416 04:22:01.365112 1 service.go:301] Service svc-latency-3981/latency-svc-r7mp2 updated: 0 ports\nI0416 04:22:01.386872 1 service.go:301] Service svc-latency-3981/latency-svc-r8wlh updated: 0 ports\nI0416 04:22:01.397467 1 service.go:301] Service svc-latency-3981/latency-svc-rdtjw updated: 0 ports\nI0416 04:22:01.403783 1 service.go:301] Service svc-latency-3981/latency-svc-rkxrw updated: 0 ports\nI0416 04:22:01.412728 1 service.go:301] Service svc-latency-3981/latency-svc-rqlv9 updated: 0 ports\nI0416 04:22:01.425976 1 service.go:301] Service svc-latency-3981/latency-svc-sgq7s updated: 0 ports\nI0416 04:22:01.439945 1 service.go:301] Service svc-latency-3981/latency-svc-sjbwr updated: 0 ports\nI0416 04:22:01.453150 1 service.go:301] Service svc-latency-3981/latency-svc-sn88h updated: 0 ports\nI0416 04:22:01.476275 1 service.go:301] Service svc-latency-3981/latency-svc-snpps updated: 0 ports\nI0416 04:22:01.538273 1 service.go:301] Service svc-latency-3981/latency-svc-sspvt updated: 0 ports\nI0416 04:22:01.558419 1 service.go:301] Service svc-latency-3981/latency-svc-sx4wq updated: 0 ports\nI0416 04:22:01.576605 1 service.go:301] Service svc-latency-3981/latency-svc-szgj5 updated: 0 ports\nI0416 04:22:01.576780 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-nmb8h\"\nI0416 04:22:01.576797 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-q4257\"\nI0416 04:22:01.576857 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qxhct\"\nI0416 04:22:01.576868 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-r8wlh\"\nI0416 04:22:01.576902 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-szgj5\"\nI0416 04:22:01.576937 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-h5rwt\"\nI0416 04:22:01.576971 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-h9jzs\"\nI0416 04:22:01.576984 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-n2tgb\"\nI0416 04:22:01.576991 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-hdl8p\"\nI0416 04:22:01.577032 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-jnmlb\"\nI0416 04:22:01.577046 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sspvt\"\nI0416 04:22:01.577054 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-k2chp\"\nI0416 04:22:01.577091 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-pkvbd\"\nI0416 04:22:01.577102 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qgpmd\"\nI0416 04:22:01.577109 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-pjccn\"\nI0416 04:22:01.577128 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-rkxrw\"\nI0416 04:22:01.577166 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kfp8t\"\nI0416 04:22:01.577201 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kk9qz\"\nI0416 
04:22:01.577212 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-n8knn\"\nI0416 04:22:01.577232 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-q7xh9\"\nI0416 04:22:01.577278 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qkpxm\"\nI0416 04:22:01.577303 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-j87cw\"\nI0416 04:22:01.577327 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-k74n8\"\nI0416 04:22:01.577335 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kr2z8\"\nI0416 04:22:01.577379 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-hsjnr\"\nI0416 04:22:01.577392 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-m9cps\"\nI0416 04:22:01.577435 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kld75\"\nI0416 04:22:01.577447 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-lrd6g\"\nI0416 04:22:01.577454 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-mk26h\"\nI0416 04:22:01.577461 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g6t7j\"\nI0416 04:22:01.577505 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-gdvbv\"\nI0416 04:22:01.577516 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-hn2xn\"\nI0416 04:22:01.577523 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-lmc2v\"\nI0416 04:22:01.577533 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-p7zbc\"\nI0416 04:22:01.577540 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-j88m6\"\nI0416 04:22:01.577600 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kjg25\"\nI0416 04:22:01.577611 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-l8znx\"\nI0416 04:22:01.577618 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qrjc4\"\nI0416 04:22:01.577672 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-r7mp2\"\nI0416 04:22:01.577714 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sn88h\"\nI0416 04:22:01.577726 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-jqc6f\"\nI0416 04:22:01.577817 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-l6j5h\"\nI0416 04:22:01.577827 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qmd8f\"\nI0416 04:22:01.577834 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qp4nv\"\nI0416 04:22:01.577841 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-rdtjw\"\nI0416 04:22:01.577874 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-rqlv9\"\nI0416 04:22:01.577916 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sjbwr\"\nI0416 04:22:01.577927 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-snpps\"\nI0416 04:22:01.577934 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g85nc\"\nI0416 04:22:01.577942 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-gz4hw\"\nI0416 04:22:01.577977 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-k6gmx\"\nI0416 04:22:01.577988 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-q4kv2\"\nI0416 04:22:01.577996 1 service.go:441] Removing service port 
\"svc-latency-3981/latency-svc-q6x9d\"\nI0416 04:22:01.578045 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qrm7g\"\nI0416 04:22:01.578074 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g947j\"\nI0416 04:22:01.578099 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-gmxj9\"\nI0416 04:22:01.578128 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kwzpv\"\nI0416 04:22:01.578165 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-knbng\"\nI0416 04:22:01.578173 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kr2r7\"\nI0416 04:22:01.578180 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-mj5h5\"\nI0416 04:22:01.578186 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-msttv\"\nI0416 04:22:01.578194 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-p9h96\"\nI0416 04:22:01.578215 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g6dmz\"\nI0416 04:22:01.578235 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-ggwzr\"\nI0416 04:22:01.578258 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-hkppp\"\nI0416 04:22:01.578280 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-r7jf4\"\nI0416 04:22:01.578303 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sgq7s\"\nI0416 04:22:01.578326 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-jrn8c\"\nI0416 04:22:01.578344 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-lhtjh\"\nI0416 04:22:01.578369 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qjn2p\"\nI0416 04:22:01.578393 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-j8895\"\nI0416 04:22:01.578411 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-l5k7g\"\nI0416 04:22:01.578437 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qx9r9\"\nI0416 04:22:01.578465 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-pq8ml\"\nI0416 04:22:01.578486 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sx4wq\"\nI0416 04:22:01.578508 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g2th9\"\nI0416 04:22:01.578526 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kw5hl\"\nI0416 04:22:01.578550 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-pgcfp\"\nI0416 04:22:01.578720 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:01.587543 1 service.go:301] Service svc-latency-3981/latency-svc-t7sb8 updated: 0 ports\nI0416 04:22:01.607956 1 service.go:301] Service svc-latency-3981/latency-svc-t9ht8 updated: 0 ports\nI0416 04:22:01.618601 1 service.go:301] Service svc-latency-3981/latency-svc-tfgb8 updated: 0 ports\nI0416 04:22:01.641161 1 service.go:301] Service svc-latency-3981/latency-svc-tmz8w updated: 0 ports\nI0416 04:22:01.649840 1 service.go:301] Service svc-latency-3981/latency-svc-tnsf2 updated: 0 ports\nI0416 04:22:01.676289 1 service.go:301] Service svc-latency-3981/latency-svc-v28nm updated: 0 ports\nI0416 04:22:01.689213 1 service.go:301] Service svc-latency-3981/latency-svc-v4227 updated: 0 ports\nI0416 04:22:01.701818 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"125.034478ms\"\nI0416 04:22:01.707505 1 service.go:301] Service svc-latency-3981/latency-svc-v9ngm updated: 0 
ports\nI0416 04:22:01.719974 1 service.go:301] Service svc-latency-3981/latency-svc-vcpfc updated: 0 ports\nI0416 04:22:01.733211 1 service.go:301] Service svc-latency-3981/latency-svc-vhd5k updated: 0 ports\nI0416 04:22:01.742092 1 service.go:301] Service svc-latency-3981/latency-svc-vmjrn updated: 0 ports\nI0416 04:22:01.752694 1 service.go:301] Service svc-latency-3981/latency-svc-vv8q4 updated: 0 ports\nI0416 04:22:01.762373 1 service.go:301] Service svc-latency-3981/latency-svc-w8pmh updated: 0 ports\nI0416 04:22:01.771315 1 service.go:301] Service svc-latency-3981/latency-svc-w9f2h updated: 0 ports\nI0416 04:22:01.783274 1 service.go:301] Service svc-latency-3981/latency-svc-wb26g updated: 0 ports\nI0416 04:22:01.792786 1 service.go:301] Service svc-latency-3981/latency-svc-wc4b7 updated: 0 ports\nI0416 04:22:01.805705 1 service.go:301] Service svc-latency-3981/latency-svc-wd4zt updated: 0 ports\nI0416 04:22:01.826227 1 service.go:301] Service svc-latency-3981/latency-svc-wp8xr updated: 0 ports\nI0416 04:22:01.836392 1 service.go:301] Service svc-latency-3981/latency-svc-wv8nx updated: 0 ports\nI0416 04:22:01.847876 1 service.go:301] Service svc-latency-3981/latency-svc-wvw6k updated: 0 ports\nI0416 04:22:01.865924 1 service.go:301] Service svc-latency-3981/latency-svc-wx822 updated: 0 ports\nI0416 04:22:01.879903 1 service.go:301] Service svc-latency-3981/latency-svc-wzbj8 updated: 0 ports\nI0416 04:22:01.892523 1 service.go:301] Service svc-latency-3981/latency-svc-x2k7p updated: 0 ports\nI0416 04:22:01.902543 1 service.go:301] Service svc-latency-3981/latency-svc-x7hwk updated: 0 ports\nI0416 04:22:01.911961 1 service.go:301] Service svc-latency-3981/latency-svc-x88zw updated: 0 ports\nI0416 04:22:01.925026 1 service.go:301] Service svc-latency-3981/latency-svc-xct4m updated: 0 ports\nI0416 04:22:01.943846 1 service.go:301] Service svc-latency-3981/latency-svc-xf99p updated: 0 ports\nI0416 04:22:01.958015 1 service.go:301] Service svc-latency-3981/latency-svc-xfghb updated: 0 ports\nI0416 04:22:01.968050 1 service.go:301] Service svc-latency-3981/latency-svc-xh99l updated: 0 ports\nI0416 04:22:01.982282 1 service.go:301] Service svc-latency-3981/latency-svc-xjbfs updated: 0 ports\nI0416 04:22:01.993529 1 service.go:301] Service svc-latency-3981/latency-svc-xjlnp updated: 0 ports\nI0416 04:22:02.003699 1 service.go:301] Service svc-latency-3981/latency-svc-xkcrs updated: 0 ports\nI0416 04:22:02.009981 1 service.go:301] Service svc-latency-3981/latency-svc-xkfsd updated: 0 ports\nI0416 04:22:02.017664 1 service.go:301] Service svc-latency-3981/latency-svc-xmxvf updated: 0 ports\nI0416 04:22:02.037999 1 service.go:301] Service svc-latency-3981/latency-svc-xrnwp updated: 0 ports\nI0416 04:22:02.046967 1 service.go:301] Service svc-latency-3981/latency-svc-z48q8 updated: 0 ports\nI0416 04:22:02.053663 1 service.go:301] Service svc-latency-3981/latency-svc-z5ghz updated: 0 ports\nI0416 04:22:02.060432 1 service.go:301] Service svc-latency-3981/latency-svc-z7k5t updated: 0 ports\nI0416 04:22:02.068911 1 service.go:301] Service svc-latency-3981/latency-svc-zbd7h updated: 0 ports\nI0416 04:22:02.075485 1 service.go:301] Service svc-latency-3981/latency-svc-zhk72 updated: 0 ports\nI0416 04:22:02.083484 1 service.go:301] Service svc-latency-3981/latency-svc-zpx4h updated: 0 ports\nI0416 04:22:02.090325 1 service.go:301] Service svc-latency-3981/latency-svc-zvshw updated: 0 ports\nI0416 04:22:02.099389 1 service.go:301] Service svc-latency-3981/latency-svc-zwsbh updated: 0 ports\nI0416 
04:22:02.106097 1 service.go:301] Service svc-latency-3981/latency-svc-zwvpj updated: 0 ports\nI0416 04:22:02.574188 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xmxvf\"\nI0416 04:22:02.574277 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xrnwp\"\nI0416 04:22:02.574334 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zhk72\"\nI0416 04:22:02.574389 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-vhd5k\"\nI0416 04:22:02.574414 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wd4zt\"\nI0416 04:22:02.574480 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wvw6k\"\nI0416 04:22:02.574509 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xkfsd\"\nI0416 04:22:02.574530 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wv8nx\"\nI0416 04:22:02.574575 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-z5ghz\"\nI0416 04:22:02.574601 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zpx4h\"\nI0416 04:22:02.574667 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zvshw\"\nI0416 04:22:02.574749 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-t9ht8\"\nI0416 04:22:02.574818 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-tnsf2\"\nI0416 04:22:02.574890 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-v9ngm\"\nI0416 04:22:02.574972 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wb26g\"\nI0416 04:22:02.575016 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zwvpj\"\nI0416 04:22:02.575075 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-v28nm\"\nI0416 04:22:02.575153 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xct4m\"\nI0416 04:22:02.575220 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-z48q8\"\nI0416 04:22:02.575272 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zbd7h\"\nI0416 04:22:02.575338 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xfghb\"\nI0416 04:22:02.575413 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-z7k5t\"\nI0416 04:22:02.575487 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-tfgb8\"\nI0416 04:22:02.580493 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-tmz8w\"\nI0416 04:22:02.580610 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-v4227\"\nI0416 04:22:02.580699 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wx822\"\nI0416 04:22:02.580787 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wc4b7\"\nI0416 04:22:02.580859 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-x2k7p\"\nI0416 04:22:02.580928 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-x88zw\"\nI0416 04:22:02.580998 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xjbfs\"\nI0416 04:22:02.581086 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xjlnp\"\nI0416 04:22:02.581168 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-vmjrn\"\nI0416 04:22:02.581240 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-w8pmh\"\nI0416 04:22:02.581298 1 service.go:441] Removing service port 
\"svc-latency-3981/latency-svc-wp8xr\"\nI0416 04:22:02.581382 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-x7hwk\"\nI0416 04:22:02.581455 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xh99l\"\nI0416 04:22:02.581525 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-vcpfc\"\nI0416 04:22:02.581598 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-vv8q4\"\nI0416 04:22:02.581666 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-w9f2h\"\nI0416 04:22:02.581739 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wzbj8\"\nI0416 04:22:02.581808 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-t7sb8\"\nI0416 04:22:02.581881 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xf99p\"\nI0416 04:22:02.581951 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xkcrs\"\nI0416 04:22:02.582029 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zwsbh\"\nI0416 04:22:02.583703 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:02.668964 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"94.777155ms\"\nI0416 04:22:03.670021 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:03.706997 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.170973ms\"\nI0416 04:22:05.903730 1 service.go:301] Service services-9070/affinity-clusterip-timeout updated: 1 ports\nI0416 04:22:05.903925 1 service.go:416] Adding new service port \"services-9070/affinity-clusterip-timeout\" at 100.70.111.217:80/TCP\nI0416 04:22:05.904139 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:05.934660 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.783738ms\"\nI0416 04:22:05.934772 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:05.966409 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.718832ms\"\nI0416 04:22:07.682870 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:07.725441 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.677291ms\"\nI0416 04:22:08.167898 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:08.201011 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.205736ms\"\nI0416 04:22:10.543905 1 service.go:301] Service services-6763/affinity-clusterip updated: 1 ports\nI0416 04:22:10.543950 1 service.go:416] Adding new service port \"services-6763/affinity-clusterip\" at 100.67.66.248:80/TCP\nI0416 04:22:10.544572 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:10.587249 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"43.298012ms\"\nI0416 04:22:10.587391 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:10.618324 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.041779ms\"\nI0416 04:22:11.696421 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:11.746818 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"50.483815ms\"\nI0416 04:22:12.747765 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:12.776111 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.421966ms\"\nI0416 04:22:13.776895 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:13.809773 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.960946ms\"\nI0416 04:22:15.276874 1 service.go:301] Service webhook-4171/e2e-test-webhook updated: 0 ports\nI0416 04:22:15.277166 1 service.go:441] Removing service port \"webhook-4171/e2e-test-webhook\"\nI0416 04:22:15.277341 1 proxier.go:846] \"Syncing iptables 
rules\"\nI0416 04:22:15.311170 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.003013ms\"\nI0416 04:22:16.311806 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:16.342213 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.481293ms\"\nI0416 04:22:16.684899 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:16.724789 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.942476ms\"\nI0416 04:22:28.793943 1 service.go:301] Service services-8414/affinity-clusterip-transition updated: 1 ports\nI0416 04:22:28.793985 1 service.go:416] Adding new service port \"services-8414/affinity-clusterip-transition\" at 100.69.172.86:80/TCP\nI0416 04:22:28.794532 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:28.826222 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.23583ms\"\nI0416 04:22:28.826387 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:28.853254 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.006894ms\"\nI0416 04:22:30.850487 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:30.884533 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.175025ms\"\nI0416 04:22:31.456435 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:31.499846 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"43.576446ms\"\nI0416 04:22:32.206295 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:32.236630 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.426886ms\"\nI0416 04:22:37.724831 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:37.769111 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"44.418463ms\"\nI0416 04:22:37.769264 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:37.805461 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.305242ms\"\nI0416 04:22:39.280801 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:39.385957 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"105.280291ms\"\nI0416 04:22:39.887451 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:39.923546 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.223518ms\"\nI0416 04:22:40.737965 1 service.go:301] Service services-6763/affinity-clusterip updated: 0 ports\nI0416 04:22:40.738419 1 service.go:441] Removing service port \"services-6763/affinity-clusterip\"\nI0416 04:22:40.738701 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:40.774124 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.701536ms\"\nI0416 04:22:41.774866 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:41.801247 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.511267ms\"\nI0416 04:22:43.566973 1 service.go:301] Service services-8414/affinity-clusterip-transition updated: 1 ports\nI0416 04:22:43.567148 1 service.go:418] Updating existing service port \"services-8414/affinity-clusterip-transition\" at 100.69.172.86:80/TCP\nI0416 04:22:43.567403 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:43.597200 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.090037ms\"\nI0416 04:22:53.254830 1 service.go:301] Service webhook-4199/e2e-test-webhook updated: 1 ports\nI0416 04:22:53.254942 1 service.go:416] Adding new service port \"webhook-4199/e2e-test-webhook\" at 100.71.83.135:8443/TCP\nI0416 04:22:53.255127 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:53.286653 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.708142ms\"\nI0416 04:22:53.286897 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:53.325457 1 proxier.go:813] 
\"SyncProxyRules complete\" elapsed=\"38.765674ms\"\nI0416 04:22:59.421420 1 service.go:301] Service webhook-4199/e2e-test-webhook updated: 0 ports\nI0416 04:22:59.421457 1 service.go:441] Removing service port \"webhook-4199/e2e-test-webhook\"\nI0416 04:22:59.421614 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:59.470227 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.74635ms\"\nI0416 04:22:59.470410 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:59.520569 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"50.309722ms\"\nI0416 04:23:00.938538 1 service.go:301] Service services-359/nodeport-collision-1 updated: 1 ports\nI0416 04:23:00.938754 1 service.go:416] Adding new service port \"services-359/nodeport-collision-1\" at 100.67.105.134:80/TCP\nI0416 04:23:00.938965 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:00.965850 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-359/nodeport-collision-1\\\" (:32446/tcp4)\"\nI0416 04:23:00.969719 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.971429ms\"\nI0416 04:23:01.421013 1 service.go:301] Service services-359/nodeport-collision-1 updated: 0 ports\nI0416 04:23:01.430903 1 service.go:441] Removing service port \"services-359/nodeport-collision-1\"\nI0416 04:23:01.431169 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:01.462997 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.092419ms\"\nI0416 04:23:01.681602 1 service.go:301] Service services-359/nodeport-collision-2 updated: 1 ports\nI0416 04:23:02.464114 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:02.492569 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.583567ms\"\nI0416 04:23:35.329030 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:35.393832 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"64.852878ms\"\nI0416 04:23:39.378331 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:39.405359 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.148307ms\"\nI0416 04:23:40.382482 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:40.435078 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"52.749231ms\"\nI0416 04:23:43.479702 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:43.505584 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.014874ms\"\nI0416 04:23:43.686367 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:43.712773 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.529865ms\"\nI0416 04:23:44.714070 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:44.744680 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.800098ms\"\nI0416 04:23:44.915226 1 service.go:301] Service services-9070/affinity-clusterip-timeout updated: 0 ports\nI0416 04:23:45.745559 1 service.go:441] Removing service port \"services-9070/affinity-clusterip-timeout\"\nI0416 04:23:45.745784 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:23:45.778112 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.535725ms\"\nI0416 04:24:02.850698 1 service.go:301] Service services-5577/nodeport-service updated: 1 ports\nI0416 04:24:02.850937 1 service.go:416] Adding new service port \"services-5577/nodeport-service\" at 100.67.230.241:80/TCP\nI0416 04:24:02.851306 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:02.880690 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-5577/nodeport-service\\\" (:32701/tcp4)\"\nI0416 04:24:02.884390 1 proxier.go:813] \"SyncProxyRules 
complete\" elapsed=\"33.622999ms\"\nI0416 04:24:02.884709 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:02.910687 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.268313ms\"\nI0416 04:24:03.087448 1 service.go:301] Service services-5577/externalsvc updated: 1 ports\nI0416 04:24:03.911222 1 service.go:416] Adding new service port \"services-5577/externalsvc\" at 100.69.208.142:80/TCP\nI0416 04:24:03.911359 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:03.942384 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.175701ms\"\nI0416 04:24:11.982382 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:12.010405 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.135943ms\"\nI0416 04:24:21.865377 1 service.go:301] Service crd-webhook-3479/e2e-test-crd-conversion-webhook updated: 1 ports\nI0416 04:24:21.865421 1 service.go:416] Adding new service port \"crd-webhook-3479/e2e-test-crd-conversion-webhook\" at 100.66.12.242:9443/TCP\nI0416 04:24:21.866314 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:21.908608 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"43.18249ms\"\nI0416 04:24:21.908811 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:21.955720 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"47.032352ms\"\nI0416 04:24:24.102234 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:24.170050 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"67.924235ms\"\nI0416 04:24:25.293902 1 service.go:301] Service services-5577/nodeport-service updated: 0 ports\nI0416 04:24:25.294066 1 service.go:441] Removing service port \"services-5577/nodeport-service\"\nI0416 04:24:25.294305 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:25.328562 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.6164ms\"\nI0416 04:24:25.328704 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:25.365801 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.189171ms\"\nI0416 04:24:29.302850 1 service.go:301] Service crd-webhook-3479/e2e-test-crd-conversion-webhook updated: 0 ports\nI0416 04:24:29.302888 1 service.go:441] Removing service port \"crd-webhook-3479/e2e-test-crd-conversion-webhook\"\nI0416 04:24:29.303487 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:29.357047 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"54.150161ms\"\nI0416 04:24:29.357225 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:29.393602 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.522359ms\"\nI0416 04:24:35.184854 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:35.237447 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"52.66963ms\"\nI0416 04:24:35.237593 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:35.267259 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.776994ms\"\nI0416 04:24:39.805900 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:39.837651 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.856046ms\"\nI0416 04:24:40.213004 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:40.262214 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"49.466282ms\"\nI0416 04:24:40.980479 1 service.go:301] Service services-5577/externalsvc updated: 0 ports\nI0416 04:24:40.981158 1 service.go:441] Removing service port \"services-5577/externalsvc\"\nI0416 04:24:40.981333 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:41.015661 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.498783ms\"\nI0416 04:24:42.016833 1 
proxier.go:846] \"Syncing iptables rules\"\nI0416 04:24:42.081452 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"64.715912ms\"\nI0416 04:25:00.683140 1 service.go:301] Service webhook-8589/e2e-test-webhook updated: 1 ports\nI0416 04:25:00.683389 1 service.go:416] Adding new service port \"webhook-8589/e2e-test-webhook\" at 100.71.236.143:8443/TCP\nI0416 04:25:00.683594 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:00.717494 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.303006ms\"\nI0416 04:25:00.717706 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:00.748138 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.588333ms\"\nI0416 04:25:02.969140 1 service.go:301] Service services-2658/affinity-nodeport-transition updated: 1 ports\nI0416 04:25:02.969180 1 service.go:416] Adding new service port \"services-2658/affinity-nodeport-transition\" at 100.66.195.203:80/TCP\nI0416 04:25:02.969407 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:03.003176 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-2658/affinity-nodeport-transition\\\" (:32112/tcp4)\"\nI0416 04:25:03.006617 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.434978ms\"\nI0416 04:25:03.006827 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:03.041513 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.86594ms\"\nI0416 04:25:03.571975 1 service.go:301] Service webhook-8589/e2e-test-webhook updated: 0 ports\nI0416 04:25:04.042460 1 service.go:441] Removing service port \"webhook-8589/e2e-test-webhook\"\nI0416 04:25:04.042642 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:04.073122 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.665736ms\"\nI0416 04:25:05.073632 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:05.124518 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"50.976326ms\"\nI0416 04:25:05.187073 1 service.go:301] Service webhook-8648/e2e-test-webhook updated: 1 ports\nI0416 04:25:05.354972 1 service.go:301] Service services-8414/affinity-clusterip-transition updated: 1 ports\nI0416 04:25:06.125434 1 service.go:418] Updating existing service port \"services-8414/affinity-clusterip-transition\" at 100.69.172.86:80/TCP\nI0416 04:25:06.125471 1 service.go:416] Adding new service port \"webhook-8648/e2e-test-webhook\" at 100.70.43.41:8443/TCP\nI0416 04:25:06.125812 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:06.160599 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.186008ms\"\nI0416 04:25:07.571107 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:07.618870 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"47.892417ms\"\nI0416 04:25:12.332512 1 service.go:301] Service webhook-8648/e2e-test-webhook updated: 0 ports\nI0416 04:25:12.332691 1 service.go:441] Removing service port \"webhook-8648/e2e-test-webhook\"\nI0416 04:25:12.332884 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:12.387049 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"54.34882ms\"\nI0416 04:25:12.387252 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:12.449479 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"62.395879ms\"\nI0416 04:25:13.360085 1 service.go:301] Service webhook-82/e2e-test-webhook updated: 1 ports\nI0416 04:25:13.360175 1 service.go:416] Adding new service port \"webhook-82/e2e-test-webhook\" at 100.70.200.135:8443/TCP\nI0416 04:25:13.360323 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:13.405370 1 proxier.go:813] 
\"SyncProxyRules complete\" elapsed=\"45.233641ms\"\nI0416 04:25:14.405719 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:14.440408 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.783126ms\"\nI0416 04:25:18.562420 1 service.go:301] Service webhook-82/e2e-test-webhook updated: 0 ports\nI0416 04:25:18.562517 1 service.go:441] Removing service port \"webhook-82/e2e-test-webhook\"\nI0416 04:25:18.562699 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:18.620985 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"58.457225ms\"\nI0416 04:25:18.621246 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:18.674506 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"53.404467ms\"\nI0416 04:25:23.910273 1 service.go:301] Service services-2658/affinity-nodeport-transition updated: 1 ports\nI0416 04:25:23.910392 1 service.go:418] Updating existing service port \"services-2658/affinity-nodeport-transition\" at 100.66.195.203:80/TCP\nI0416 04:25:23.910552 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:23.962205 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"51.813526ms\"\nI0416 04:25:25.438728 1 service.go:301] Service webhook-6678/e2e-test-webhook updated: 1 ports\nI0416 04:25:25.438901 1 service.go:416] Adding new service port \"webhook-6678/e2e-test-webhook\" at 100.69.157.77:8443/TCP\nI0416 04:25:25.439116 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:25.475153 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.383361ms\"\nI0416 04:25:25.475292 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:25.504608 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.424883ms\"\nI0416 04:25:26.228655 1 service.go:301] Service dns-6992/test-service-2 updated: 1 ports\nI0416 04:25:26.505199 1 service.go:416] Adding new service port \"dns-6992/test-service-2:http\" at 100.71.247.254:80/TCP\nI0416 04:25:26.505444 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:26.561262 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"56.089477ms\"\nI0416 04:25:26.872335 1 service.go:301] Service services-2658/affinity-nodeport-transition updated: 1 ports\nI0416 04:25:27.561656 1 service.go:418] Updating existing service port \"services-2658/affinity-nodeport-transition\" at 100.66.195.203:80/TCP\nI0416 04:25:27.561978 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:27.589107 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.509144ms\"\nI0416 04:25:28.570067 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:28.606414 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.4512ms\"\nI0416 04:25:30.361554 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:30.398873 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.553927ms\"\nI0416 04:25:31.655155 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:31.693307 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.260337ms\"\nI0416 04:25:31.751616 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:31.770662 1 service.go:301] Service webhook-6678/e2e-test-webhook updated: 0 ports\nI0416 04:25:31.796496 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"44.95568ms\"\nI0416 04:25:32.287151 1 service.go:301] Service services-2658/affinity-nodeport-transition updated: 0 ports\nI0416 04:25:32.796615 1 service.go:441] Removing service port \"webhook-6678/e2e-test-webhook\"\nI0416 04:25:32.796719 1 service.go:441] Removing service port \"services-2658/affinity-nodeport-transition\"\nI0416 04:25:32.796895 1 
proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:32.827506 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.897237ms\"\nI0416 04:25:35.110053 1 service.go:301] Service webhook-8817/e2e-test-webhook updated: 1 ports\nI0416 04:25:35.110098 1 service.go:416] Adding new service port \"webhook-8817/e2e-test-webhook\" at 100.67.61.52:8443/TCP\nI0416 04:25:35.110210 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:35.149516 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.415591ms\"\nI0416 04:25:35.149696 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:35.185490 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.944553ms\"\nI0416 04:25:40.256895 1 service.go:301] Service webhook-8817/e2e-test-webhook updated: 0 ports\nI0416 04:25:40.257180 1 service.go:441] Removing service port \"webhook-8817/e2e-test-webhook\"\nI0416 04:25:40.257723 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:40.297476 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.326861ms\"\nI0416 04:25:40.297635 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:25:40.324900 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.394713ms\"\nI0416 04:26:07.869148 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:07.928857 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"59.811689ms\"\nI0416 04:26:07.929097 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:07.971793 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.840254ms\"\nI0416 04:26:08.101330 1 service.go:301] Service dns-6992/test-service-2 updated: 0 ports\nI0416 04:26:08.974268 1 service.go:441] Removing service port \"dns-6992/test-service-2:http\"\nI0416 04:26:08.974768 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:09.025486 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"51.23648ms\"\nI0416 04:26:36.422024 1 service.go:301] Service services-2800/nodeport-reuse updated: 1 ports\nI0416 04:26:36.422145 1 service.go:416] Adding new service port \"services-2800/nodeport-reuse\" at 100.71.109.134:80/TCP\nI0416 04:26:36.422594 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:36.469738 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-2800/nodeport-reuse\\\" (:30893/tcp4)\"\nI0416 04:26:36.475016 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"52.871677ms\"\nI0416 04:26:36.475489 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:36.536505 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"61.134968ms\"\nI0416 04:26:36.658047 1 service.go:301] Service services-2800/nodeport-reuse updated: 0 ports\nI0416 04:26:37.536671 1 service.go:441] Removing service port \"services-2800/nodeport-reuse\"\nI0416 04:26:37.536810 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:37.566512 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.844103ms\"\nI0416 04:26:43.971398 1 service.go:301] Service services-2800/nodeport-reuse updated: 1 ports\nI0416 04:26:43.971439 1 service.go:416] Adding new service port \"services-2800/nodeport-reuse\" at 100.68.243.209:80/TCP\nI0416 04:26:43.971735 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:44.000984 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-2800/nodeport-reuse\\\" (:30893/tcp4)\"\nI0416 04:26:44.007662 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.220017ms\"\nI0416 04:26:44.007817 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:44.035774 1 proxier.go:813] \"SyncProxyRules complete\" 
elapsed=\"28.082303ms\"\nI0416 04:26:44.207542 1 service.go:301] Service services-2800/nodeport-reuse updated: 0 ports\nI0416 04:26:45.035972 1 service.go:441] Removing service port \"services-2800/nodeport-reuse\"\nI0416 04:26:45.036190 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:45.083510 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"47.540137ms\"\nI0416 04:26:47.043274 1 service.go:301] Service services-3168/externalname-service updated: 1 ports\nI0416 04:26:47.043315 1 service.go:416] Adding new service port \"services-3168/externalname-service:http\" at 100.67.116.29:80/TCP\nI0416 04:26:47.043901 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:47.101188 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"57.866622ms\"\nI0416 04:26:47.101466 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:47.149784 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.418746ms\"\nI0416 04:26:49.300592 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:49.328845 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.415335ms\"\nI0416 04:26:49.963625 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:50.014413 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"50.890243ms\"\nI0416 04:27:36.817348 1 service.go:301] Service services-3168/externalname-service updated: 0 ports\nI0416 04:27:36.817630 1 service.go:441] Removing service port \"services-3168/externalname-service:http\"\nI0416 04:27:36.817868 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:36.859335 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"41.709657ms\"\nI0416 04:27:36.859449 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:36.891958 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.595007ms\"\nI0416 04:27:44.647023 1 service.go:301] Service webhook-1600/e2e-test-webhook updated: 1 ports\nI0416 04:27:44.647152 1 service.go:416] Adding new service port \"webhook-1600/e2e-test-webhook\" at 100.68.126.30:8443/TCP\nI0416 04:27:44.647314 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:44.702683 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"55.528654ms\"\nI0416 04:27:44.702898 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:44.758545 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"55.767944ms\"\nI0416 04:27:44.825586 1 service.go:301] Service conntrack-2375/svc-udp updated: 1 ports\nI0416 04:27:45.758763 1 service.go:416] Adding new service port \"conntrack-2375/svc-udp:udp\" at 100.65.237.14:80/UDP\nI0416 04:27:45.758884 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:45.786179 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for conntrack-2375/svc-udp:udp\\\" (:30651/udp4)\"\nI0416 04:27:45.789878 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.131406ms\"\nI0416 04:27:47.815288 1 service.go:301] Service webhook-1600/e2e-test-webhook updated: 0 ports\nI0416 04:27:47.815899 1 service.go:441] Removing service port \"webhook-1600/e2e-test-webhook\"\nI0416 04:27:47.816116 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:47.854991 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.088633ms\"\nI0416 04:27:47.855116 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:47.890958 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.937089ms\"\nI0416 04:27:56.652270 1 proxier.go:830] \"Stale service\" protocol=\"udp\" svcPortName=\"conntrack-2375/svc-udp:udp\" clusterIP=\"100.65.237.14\"\nI0416 04:27:56.652337 1 proxier.go:840] Stale udp service 
NodePort conntrack-2375/svc-udp:udp -> 30651\nI0416 04:27:56.652362 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:56.704687 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"52.569639ms\"\nI0416 04:28:02.166585 1 service.go:301] Service conntrack-2986/boom-server updated: 1 ports\nI0416 04:28:02.167256 1 service.go:416] Adding new service port \"conntrack-2986/boom-server\" at 100.66.119.142:9000/TCP\nI0416 04:28:02.167988 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:02.198794 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.548621ms\"\nI0416 04:28:02.198922 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:02.229032 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.199527ms\"\nI0416 04:28:15.856736 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:15.903212 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"46.579837ms\"\nI0416 04:28:17.589526 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:17.637368 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"47.94604ms\"\nI0416 04:28:17.637531 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:17.672878 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.478837ms\"\nI0416 04:28:20.174238 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:20.246310 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"72.17267ms\"\nI0416 04:28:20.246493 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:20.294690 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.346811ms\"\nI0416 04:28:21.667708 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:21.712761 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"45.164312ms\"\nI0416 04:28:22.713649 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:22.746733 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.165876ms\"\nI0416 04:28:23.640354 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:23.669342 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.096145ms\"\nI0416 04:28:24.296776 1 service.go:301] Service services-8414/affinity-clusterip-transition updated: 0 ports\nI0416 04:28:24.296809 1 service.go:441] Removing service port \"services-8414/affinity-clusterip-transition\"\nI0416 04:28:24.296917 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:24.335998 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.182564ms\"\nI0416 04:28:25.336173 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:25.375292 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.178854ms\"\nI0416 04:28:31.633454 1 service.go:301] Service webhook-4004/e2e-test-webhook updated: 1 ports\nI0416 04:28:31.634115 1 service.go:416] Adding new service port \"webhook-4004/e2e-test-webhook\" at 100.70.223.98:8443/TCP\nI0416 04:28:31.634349 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:31.676339 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.234726ms\"\nI0416 04:28:31.676518 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:31.702752 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.383771ms\"\nI0416 04:28:34.529085 1 service.go:301] Service webhook-7035/e2e-test-webhook updated: 1 ports\nI0416 04:28:34.529309 1 service.go:416] Adding new service port \"webhook-7035/e2e-test-webhook\" at 100.65.14.118:8443/TCP\nI0416 04:28:34.529516 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:34.570576 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"41.275293ms\"\nI0416 04:28:34.570755 1 proxier.go:846] \"Syncing 
iptables rules\"\nI0416 04:28:34.626038 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"55.428294ms\"\nI0416 04:28:35.013772 1 service.go:301] Service conntrack-2375/svc-udp updated: 0 ports\nI0416 04:28:35.626161 1 service.go:441] Removing service port \"conntrack-2375/svc-udp:udp\"\nI0416 04:28:35.626291 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:35.662332 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.177246ms\"\nI0416 04:28:37.685890 1 service.go:301] Service webhook-7035/e2e-test-webhook updated: 0 ports\nI0416 04:28:37.685925 1 service.go:441] Removing service port \"webhook-7035/e2e-test-webhook\"\nI0416 04:28:37.686069 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:37.722366 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.434257ms\"\nI0416 04:28:37.722527 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:37.747618 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"25.222445ms\"\nI0416 04:28:37.755613 1 service.go:301] Service webhook-4004/e2e-test-webhook updated: 0 ports\nI0416 04:28:38.747748 1 service.go:441] Removing service port \"webhook-4004/e2e-test-webhook\"\nI0416 04:28:38.748295 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:38.796737 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.99424ms\"\nI0416 04:28:48.831734 1 service.go:301] Service services-7969/clusterip-service updated: 1 ports\nI0416 04:28:48.831779 1 service.go:416] Adding new service port \"services-7969/clusterip-service\" at 100.71.97.64:80/TCP\nI0416 04:28:48.831997 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:48.861605 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.825205ms\"\nI0416 04:28:48.861715 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:48.886221 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"24.589811ms\"\nI0416 04:28:49.076751 1 service.go:301] Service services-7969/externalsvc updated: 1 ports\nI0416 04:28:49.887338 1 service.go:416] Adding new service port \"services-7969/externalsvc\" at 100.69.229.108:80/TCP\nI0416 04:28:49.887501 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:49.938191 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"50.866697ms\"\nI0416 04:28:50.938678 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:50.965781 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.180085ms\"\nI0416 04:28:51.966289 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:52.014322 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.161312ms\"\nI0416 04:28:53.288493 1 service.go:301] Service services-7969/clusterip-service updated: 0 ports\nI0416 04:28:53.288529 1 service.go:441] Removing service port \"services-7969/clusterip-service\"\nI0416 04:28:53.289361 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:53.338333 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"49.789654ms\"\nI0416 04:28:54.338491 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:54.363791 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"25.374594ms\"\nI0416 04:29:01.278445 1 service.go:301] Service webhook-1322/e2e-test-webhook updated: 1 ports\nI0416 04:29:01.278693 1 service.go:416] Adding new service port \"webhook-1322/e2e-test-webhook\" at 100.71.177.220:8443/TCP\nI0416 04:29:01.278942 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:01.373892 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"95.215992ms\"\nI0416 04:29:01.374134 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:01.416114 
1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.108168ms\"\nI0416 04:29:03.110199 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:03.145852 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.765118ms\"\nI0416 04:29:04.115185 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:04.160789 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"45.699143ms\"\nI0416 04:29:05.417756 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:05.471041 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"53.389288ms\"\nI0416 04:29:06.095540 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:06.127075 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.624085ms\"\nI0416 04:29:06.540082 1 service.go:301] Service services-7969/externalsvc updated: 0 ports\nI0416 04:29:06.540115 1 service.go:441] Removing service port \"services-7969/externalsvc\"\nI0416 04:29:06.541519 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:06.585548 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"45.426331ms\"\nI0416 04:29:07.586697 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:07.613976 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.346704ms\"\nI0416 04:29:08.282134 1 service.go:301] Service deployment-9343/test-rolling-update-with-lb updated: 1 ports\nI0416 04:29:08.296487 1 service.go:301] Service deployment-9343/test-rolling-update-with-lb updated: 1 ports\nI0416 04:29:08.614954 1 service.go:416] Adding new service port \"deployment-9343/test-rolling-update-with-lb\" at 100.71.170.203:80/TCP\nI0416 04:29:08.615168 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:08.638631 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for deployment-9343/test-rolling-update-with-lb\\\" (:31243/tcp4)\"\nI0416 04:29:08.644068 1 service_health.go:98] Opening healthcheck \"deployment-9343/test-rolling-update-with-lb\" on port 30608\nI0416 04:29:08.644223 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.283166ms\"\nI0416 04:29:13.624124 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:13.672587 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.557172ms\"\nI0416 04:29:13.672915 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:13.677833 1 service.go:301] Service conntrack-2986/boom-server updated: 0 ports\nI0416 04:29:13.720971 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.345372ms\"\nI0416 04:29:14.722083 1 service.go:441] Removing service port \"conntrack-2986/boom-server\"\nI0416 04:29:14.722248 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:14.751217 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.135732ms\"\nI0416 04:29:16.885259 1 service.go:301] Service webhook-1322/e2e-test-webhook updated: 0 ports\nI0416 04:29:16.885388 1 service.go:441] Removing service port \"webhook-1322/e2e-test-webhook\"\nI0416 04:29:16.885658 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:16.918832 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.437787ms\"\nI0416 04:29:16.918978 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:16.964539 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"45.670317ms\"\nI0416 04:29:26.527851 1 service.go:301] Service dns-9391/test-service-2 updated: 1 ports\nI0416 04:29:26.527908 1 service.go:416] Adding new service port \"dns-9391/test-service-2:http\" at 100.70.22.117:80/TCP\nI0416 04:29:26.528021 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:26.558712 1 proxier.go:813] \"SyncProxyRules complete\" 
elapsed=\"30.817715ms\"\nI0416 04:29:26.558986 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:26.590891 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.997763ms\"\nI0416 04:29:32.639127 1 service.go:301] Service endpointslicemirroring-5414/example-custom-endpoints updated: 1 ports\nI0416 04:29:32.639169 1 service.go:416] Adding new service port \"endpointslicemirroring-5414/example-custom-endpoints:example\" at 100.70.236.22:80/TCP\nI0416 04:29:32.639375 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:32.690864 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"51.695008ms\"\nI0416 04:29:32.881878 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:32.919140 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.367843ms\"\nI0416 04:29:33.838522 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:33.876982 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.542324ms\"\nI0416 04:29:39.574758 1 service.go:301] Service endpointslicemirroring-5414/example-custom-endpoints updated: 0 ports\nI0416 04:29:39.574947 1 service.go:441] Removing service port \"endpointslicemirroring-5414/example-custom-endpoints:example\"\nI0416 04:29:39.575140 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:39.626116 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"51.157029ms\"\nI0416 04:29:42.795110 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:42.824325 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.307596ms\"\nI0416 04:29:59.834853 1 service.go:301] Service services-6926/service-proxy-toggled updated: 1 ports\nI0416 04:29:59.835116 1 service.go:416] Adding new service port \"services-6926/service-proxy-toggled\" at 100.67.81.8:80/TCP\nI0416 04:29:59.835348 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:59.879318 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"44.210139ms\"\nI0416 04:29:59.879497 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:59.909193 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.844152ms\"\nI0416 04:30:03.054619 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:03.084100 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.561297ms\"\nI0416 04:30:03.911521 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:03.953651 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.236964ms\"\nI0416 04:30:04.309703 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:04.338605 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.021135ms\"\nI0416 04:30:05.317537 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:05.344250 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.814228ms\"\nI0416 04:30:06.345054 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:06.373160 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.24776ms\"\nI0416 04:30:07.374190 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:07.401586 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.49203ms\"\nI0416 04:30:08.202202 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:08.234702 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.608523ms\"\nI0416 04:30:08.431568 1 service.go:301] Service dns-9391/test-service-2 updated: 0 ports\nI0416 04:30:09.234825 1 service.go:441] Removing service port \"dns-9391/test-service-2:http\"\nI0416 04:30:09.234939 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:09.264442 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.617767ms\"\nI0416 
04:30:10.010205 1 service.go:301] Service services-6780/service-headless-toggled updated: 1 ports\nI0416 04:30:10.265177 1 service.go:416] Adding new service port \"services-6780/service-headless-toggled\" at 100.67.193.44:80/TCP\nI0416 04:30:10.265435 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:10.313786 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.629322ms\"\nI0416 04:30:11.593694 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:11.628409 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.814373ms\"\nI0416 04:30:14.417648 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:14.456334 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.784797ms\"\nI0416 04:30:14.628189 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:14.657900 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.801887ms\"\nI0416 04:30:35.291584 1 service.go:301] Service webhook-2667/e2e-test-webhook updated: 1 ports\nI0416 04:30:35.291803 1 service.go:416] Adding new service port \"webhook-2667/e2e-test-webhook\" at 100.65.57.14:8443/TCP\nI0416 04:30:35.292050 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:35.326900 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.273861ms\"\nI0416 04:30:35.327068 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:35.356916 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.988339ms\"\nI0416 04:30:39.885948 1 service.go:301] Service webhook-2667/e2e-test-webhook updated: 0 ports\nI0416 04:30:39.886132 1 service.go:441] Removing service port \"webhook-2667/e2e-test-webhook\"\nI0416 04:30:39.886309 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:39.919012 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.884504ms\"\nI0416 04:30:39.919195 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:39.948560 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.51045ms\"\nI0416 04:30:49.842701 1 service.go:301] Service services-4420/nodeport-test updated: 1 ports\nI0416 04:30:49.842927 1 service.go:416] Adding new service port \"services-4420/nodeport-test:http\" at 100.68.65.94:80/TCP\nI0416 04:30:49.843667 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:49.878900 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-4420/nodeport-test:http\\\" (:30134/tcp4)\"\nI0416 04:30:49.882807 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.903406ms\"\nI0416 04:30:49.882957 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:49.910305 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.466712ms\"\nI0416 04:30:51.859350 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:51.896861 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.639634ms\"\nI0416 04:30:53.431880 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:30:53.502383 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"70.563865ms\"\nI0416 04:31:01.530323 1 service.go:301] Service crd-webhook-8947/e2e-test-crd-conversion-webhook updated: 1 ports\nI0416 04:31:01.530433 1 service.go:416] Adding new service port \"crd-webhook-8947/e2e-test-crd-conversion-webhook\" at 100.66.196.68:9443/TCP\nI0416 04:31:01.530569 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:01.561262 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.830199ms\"\nI0416 04:31:01.561389 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:01.601391 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.096074ms\"\nI0416 04:31:07.918074 1 
service.go:301] Service crd-webhook-8947/e2e-test-crd-conversion-webhook updated: 0 ports\nI0416 04:31:07.918109 1 service.go:441] Removing service port \"crd-webhook-8947/e2e-test-crd-conversion-webhook\"\nI0416 04:31:07.918295 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:07.959989 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"41.871477ms\"\nI0416 04:31:07.960140 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:07.992301 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.280915ms\"\nI0416 04:31:24.903985 1 service.go:301] Service webhook-7310/e2e-test-webhook updated: 1 ports\nI0416 04:31:24.904230 1 service.go:416] Adding new service port \"webhook-7310/e2e-test-webhook\" at 100.68.212.102:8443/TCP\nI0416 04:31:24.904408 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:24.941706 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.500412ms\"\nI0416 04:31:24.941889 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:24.969951 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.217693ms\"\nI0416 04:31:32.716317 1 service.go:301] Service webhook-7310/e2e-test-webhook updated: 0 ports\nI0416 04:31:32.716869 1 service.go:441] Removing service port \"webhook-7310/e2e-test-webhook\"\nI0416 04:31:32.717327 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:32.752216 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.352917ms\"\nI0416 04:31:32.752522 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:32.780407 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.045292ms\"\nI0416 04:31:33.090958 1 service.go:301] Service webhook-6874/e2e-test-webhook updated: 1 ports\nI0416 04:31:33.780560 1 service.go:416] Adding new service port \"webhook-6874/e2e-test-webhook\" at 100.71.162.175:8443/TCP\nI0416 04:31:33.780894 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:33.890787 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"110.240838ms\"\nI0416 04:31:36.738396 1 service.go:301] Service webhook-6874/e2e-test-webhook updated: 0 ports\nI0416 04:31:36.738531 1 service.go:441] Removing service port \"webhook-6874/e2e-test-webhook\"\nI0416 04:31:36.738737 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:36.788839 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"50.323629ms\"\nI0416 04:31:36.788993 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:36.815819 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.953337ms\"\nI0416 04:31:57.081538 1 service.go:301] Service services-4420/nodeport-test updated: 0 ports\nI0416 04:31:57.082044 1 service.go:441] Removing service port \"services-4420/nodeport-test:http\"\nI0416 04:31:57.082550 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:57.142002 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"59.945707ms\"\nI0416 04:31:57.142280 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:57.193486 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"51.313755ms\"\nI0416 04:31:58.744283 1 service.go:301] Service services-5254/affinity-nodeport-timeout updated: 1 ports\nI0416 04:31:58.744765 1 service.go:416] Adding new service port \"services-5254/affinity-nodeport-timeout\" at 100.66.92.70:80/TCP\nI0416 04:31:58.744960 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:58.774750 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-5254/affinity-nodeport-timeout\\\" (:31800/tcp4)\"\nI0416 04:31:58.784531 1 proxier.go:813] \"SyncProxyRules complete\" 
elapsed=\"40.199927ms\"\nI0416 04:31:59.784963 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:31:59.815993 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.083496ms\"\nI0416 04:32:01.983581 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:02.055778 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"72.257669ms\"\nI0416 04:32:02.606616 1 service.go:301] Service services-3684/up-down-1 updated: 1 ports\nI0416 04:32:02.606792 1 service.go:416] Adding new service port \"services-3684/up-down-1\" at 100.65.197.198:80/TCP\nI0416 04:32:02.606996 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:02.641296 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.544094ms\"\nI0416 04:32:03.641877 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:03.669533 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.720334ms\"\nI0416 04:32:03.997221 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:04.045179 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.067586ms\"\nI0416 04:32:05.045697 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:05.087817 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.231461ms\"\nI0416 04:32:06.089339 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:06.175398 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"86.171179ms\"\nI0416 04:32:07.868641 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:07.905165 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.617007ms\"\nI0416 04:32:09.266167 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:09.308857 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.786964ms\"\nI0416 04:32:10.674444 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:10.760871 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"86.570199ms\"\nI0416 04:32:10.761089 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:10.792954 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.050578ms\"\nI0416 04:32:11.372273 1 service.go:301] Service webhook-5919/e2e-test-webhook updated: 1 ports\nI0416 04:32:11.793490 1 service.go:416] Adding new service port \"webhook-5919/e2e-test-webhook\" at 100.66.114.125:8443/TCP\nI0416 04:32:11.793750 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:11.827349 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.897897ms\"\nI0416 04:32:12.578181 1 service.go:301] Service services-3684/up-down-2 updated: 1 ports\nI0416 04:32:12.827598 1 service.go:416] Adding new service port \"services-3684/up-down-2\" at 100.67.201.88:80/TCP\nI0416 04:32:12.827937 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:12.898773 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"71.193324ms\"\nI0416 04:32:14.227095 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:14.269521 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.524885ms\"\nI0416 04:32:14.542019 1 service.go:301] Service webhook-5919/e2e-test-webhook updated: 0 ports\nI0416 04:32:15.269660 1 service.go:441] Removing service port \"webhook-5919/e2e-test-webhook\"\nI0416 04:32:15.270052 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:15.332567 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"62.919484ms\"\nI0416 04:32:17.361232 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:17.389752 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.638362ms\"\nI0416 04:32:17.869638 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:17.900880 1 proxier.go:813] 
\"SyncProxyRules complete\" elapsed=\"31.34734ms\"\nI0416 04:32:19.070769 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:19.128116 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"57.457003ms\"\nI0416 04:32:20.123467 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:20.226360 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"102.999987ms\"\nI0416 04:32:21.226576 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:21.285544 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"59.057967ms\"\nI0416 04:32:22.286790 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:22.315149 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.535193ms\"\nI0416 04:32:31.272662 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:31.355210 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"82.617343ms\"\nI0416 04:32:31.355473 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:31.385766 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.383319ms\"\nI0416 04:32:33.248264 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:33.334173 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"86.015678ms\"\nI0416 04:32:33.334538 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:33.370892 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.686476ms\"\nI0416 04:32:35.331376 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:35.372114 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.842608ms\"\nI0416 04:32:35.372362 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:35.411619 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.359926ms\"\nI0416 04:32:36.412141 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:36.440239 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.169616ms\"\nI0416 04:32:49.318851 1 service.go:301] Service services-5268/affinity-nodeport updated: 1 ports\nI0416 04:32:49.323060 1 service.go:416] Adding new service port \"services-5268/affinity-nodeport\" at 100.67.128.77:80/TCP\nI0416 04:32:49.324689 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:49.371300 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-5268/affinity-nodeport\\\" (:30071/tcp4)\"\nI0416 04:32:49.379116 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"56.086699ms\"\nI0416 04:32:49.379294 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:49.428216 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"49.018756ms\"\nI0416 04:32:51.709271 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:51.769008 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"59.842285ms\"\nI0416 04:32:54.499312 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:54.556516 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"57.311865ms\"\nI0416 04:32:58.961648 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:32:58.994932 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.387617ms\"\nI0416 04:33:18.319961 1 service.go:301] Service pods-2391/fooservice updated: 1 ports\nI0416 04:33:18.320005 1 service.go:416] Adding new service port \"pods-2391/fooservice\" at 100.71.250.252:8765/TCP\nI0416 04:33:18.321109 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:18.370841 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"50.831097ms\"\nI0416 04:33:18.371102 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:18.452342 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"81.462606ms\"\nI0416 04:33:18.776779 1 
service.go:301] Service services-1477/sourceip-test updated: 1 ports\nI0416 04:33:19.453212 1 service.go:416] Adding new service port \"services-1477/sourceip-test\" at 100.69.219.85:8080/TCP\nI0416 04:33:19.453438 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:19.558783 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"105.5797ms\"\nI0416 04:33:20.693172 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:20.732031 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.919877ms\"\nI0416 04:33:21.512983 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:21.556977 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"44.10836ms\"\nI0416 04:33:22.558042 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:22.589840 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.88926ms\"\nI0416 04:33:23.700562 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:23.753234 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"52.818441ms\"\nI0416 04:33:24.703208 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:24.770383 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"67.285601ms\"\nI0416 04:33:25.770945 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:25.801701 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.830964ms\"\nI0416 04:33:27.522288 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:27.582191 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"59.956654ms\"\nI0416 04:33:27.631913 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:27.683337 1 service.go:301] Service pods-2391/fooservice updated: 0 ports\nI0416 04:33:27.694015 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"62.19833ms\"\nI0416 04:33:28.694148 1 service.go:441] Removing service port \"pods-2391/fooservice\"\nI0416 04:33:28.694283 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:28.729985 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.839945ms\"\nI0416 04:33:31.360105 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:31.388585 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.568037ms\"\nI0416 04:33:31.561675 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:31.590214 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.640471ms\"\nI0416 04:33:31.838017 1 service.go:301] Service services-5268/affinity-nodeport updated: 0 ports\nI0416 04:33:32.590797 1 service.go:441] Removing service port \"services-5268/affinity-nodeport\"\nI0416 04:33:32.590998 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:32.626182 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"35.372928ms\"\nI0416 04:33:43.196598 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:43.286199 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"89.70462ms\"\nI0416 04:33:44.780550 1 service.go:301] Service services-2553/nodeport-update-service updated: 1 ports\nI0416 04:33:44.780793 1 service.go:416] Adding new service port \"services-2553/nodeport-update-service\" at 100.65.167.104:80/TCP\nI0416 04:33:44.781002 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:44.817065 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.304864ms\"\nI0416 04:33:44.817282 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:44.844783 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.68995ms\"\nI0416 04:33:45.260407 1 service.go:301] Service services-2553/nodeport-update-service updated: 1 ports\nI0416 04:33:45.845505 1 service.go:416] Adding new service port 
\"services-2553/nodeport-update-service:tcp-port\" at 100.65.167.104:80/TCP\nI0416 04:33:45.845526 1 service.go:441] Removing service port \"services-2553/nodeport-update-service\"\nI0416 04:33:45.845640 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:45.873502 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-2553/nodeport-update-service:tcp-port\\\" (:31355/tcp4)\"\nI0416 04:33:45.878668 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.181779ms\"\nI0416 04:33:47.328095 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:47.362679 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.703151ms\"\nI0416 04:33:56.170074 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:33:56.204312 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.349919ms\"\nI0416 04:34:04.045091 1 service.go:301] Service conntrack-577/svc-udp updated: 1 ports\nI0416 04:34:04.045134 1 service.go:416] Adding new service port \"conntrack-577/svc-udp:udp\" at 100.66.164.29:80/UDP\nI0416 04:34:04.045452 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:04.087254 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"42.08977ms\"\nI0416 04:34:04.087485 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:04.126806 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.426293ms\"\nI0416 04:34:04.707031 1 service.go:301] Service deployment-9343/test-rolling-update-with-lb updated: 0 ports\nI0416 04:34:05.127781 1 service.go:441] Removing service port \"deployment-9343/test-rolling-update-with-lb\"\nI0416 04:34:05.127913 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:05.162084 1 service_health.go:83] Closing healthcheck \"deployment-9343/test-rolling-update-with-lb\" on port 30608\nI0416 04:34:05.162258 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.479752ms\"\nI0416 04:34:12.215451 1 service.go:301] Service services-31/endpoint-test2 updated: 1 ports\nI0416 04:34:12.215495 1 service.go:416] Adding new service port \"services-31/endpoint-test2\" at 100.66.35.15:80/TCP\nI0416 04:34:12.215689 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:12.270218 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"54.719173ms\"\nI0416 04:34:12.270328 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:12.301601 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.353238ms\"\nI0416 04:34:14.914397 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:14.983915 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"69.627715ms\"\nI0416 04:34:17.256104 1 proxier.go:830] \"Stale service\" protocol=\"udp\" svcPortName=\"conntrack-577/svc-udp:udp\" clusterIP=\"100.66.164.29\"\nI0416 04:34:17.256129 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:17.289972 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.965737ms\"\nI0416 04:34:27.334897 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:27.375737 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.952539ms\"\nI0416 04:34:27.375922 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:27.408260 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.494241ms\"\nI0416 04:34:28.634312 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:28.662304 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.069068ms\"\nI0416 04:34:29.664603 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:29.701333 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.822299ms\"\nI0416 04:34:29.724689 1 service.go:301] Service 
services-2553/nodeport-update-service updated: 2 ports\nI0416 04:34:30.701579 1 service.go:418] Updating existing service port \"services-2553/nodeport-update-service:tcp-port\" at 100.65.167.104:80/TCP\nI0416 04:34:30.701605 1 service.go:416] Adding new service port \"services-2553/nodeport-update-service:udp-port\" at 100.65.167.104:80/UDP\nI0416 04:34:30.701832 1 proxier.go:830] \"Stale service\" protocol=\"udp\" svcPortName=\"services-2553/nodeport-update-service:udp-port\" clusterIP=\"100.65.167.104\"\nI0416 04:34:30.701895 1 proxier.go:840] Stale udp service NodePort services-2553/nodeport-update-service:udp-port -> 32020\nI0416 04:34:30.701920 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:30.728319 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-2553/nodeport-update-service:udp-port\\\" (:32020/udp4)\"\nI0416 04:34:30.742708 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"41.144798ms\"\nI0416 04:34:31.743574 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:31.773481 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.999139ms\"\nI0416 04:34:32.773755 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:32.827445 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"53.80164ms\"\nI0416 04:34:33.828704 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:33.866151 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.548316ms\"\nI0416 04:34:34.345058 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:34.373802 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.861537ms\"\nI0416 04:34:34.655671 1 service.go:301] Service services-5254/affinity-nodeport-timeout updated: 0 ports\nI0416 04:34:35.373926 1 service.go:441] Removing service port \"services-5254/affinity-nodeport-timeout\"\nI0416 04:34:35.374154 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:35.408626 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.704939ms\"\nI0416 04:34:47.194908 1 service.go:301] Service services-1817/externalip-test updated: 1 ports\nI0416 04:34:47.195287 1 service.go:416] Adding new service port \"services-1817/externalip-test:http\" at 100.66.123.89:80/TCP\nI0416 04:34:47.195481 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:47.268595 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"73.637114ms\"\nI0416 04:34:47.269364 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:47.332285 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"63.646097ms\"\nI0416 04:34:49.978592 1 service.go:301] Service conntrack-577/svc-udp updated: 0 ports\nI0416 04:34:49.979169 1 service.go:441] Removing service port \"conntrack-577/svc-udp:udp\"\nI0416 04:34:49.979339 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:50.047674 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"69.036997ms\"\nI0416 04:34:50.047974 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:50.095430 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"47.559269ms\"\nI0416 04:34:54.075440 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:54.108595 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.258126ms\"\nI0416 04:34:58.154610 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:34:58.204341 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"49.840477ms\"\nI0416 04:35:06.045741 1 service.go:301] Service webhook-2422/e2e-test-webhook updated: 1 ports\nI0416 04:35:06.046284 1 service.go:416] Adding new service port \"webhook-2422/e2e-test-webhook\" at 
100.68.233.192:8443/TCP\nI0416 04:35:06.046489 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:06.082896 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.622847ms\"\nI0416 04:35:06.083050 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:06.111145 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"28.219537ms\"\nI0416 04:35:12.627325 1 service.go:301] Service webhook-2422/e2e-test-webhook updated: 0 ports\nI0416 04:35:12.628000 1 service.go:441] Removing service port \"webhook-2422/e2e-test-webhook\"\nI0416 04:35:12.628213 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:12.676817 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"49.042254ms\"\nI0416 04:35:12.676944 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:12.704210 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.365362ms\"\nI0416 04:35:25.455074 1 service.go:301] Service resourcequota-1343/test-service updated: 1 ports\nI0416 04:35:25.455735 1 service.go:416] Adding new service port \"resourcequota-1343/test-service\" at 100.67.53.49:80/TCP\nI0416 04:35:25.455933 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:25.485084 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"29.366169ms\"\nI0416 04:35:25.699020 1 service.go:301] Service resourcequota-1343/test-service-np updated: 1 ports\nI0416 04:35:25.699424 1 service.go:416] Adding new service port \"resourcequota-1343/test-service-np\" at 100.66.58.108:80/TCP\nI0416 04:35:25.699586 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:25.725855 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for resourcequota-1343/test-service-np\\\" (:30171/tcp4)\"\nI0416 04:35:25.730774 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.712921ms\"\nI0416 04:35:28.412290 1 service.go:301] Service resourcequota-1343/test-service updated: 0 ports\nI0416 04:35:28.412326 1 service.go:441] Removing service port \"resourcequota-1343/test-service\"\nI0416 04:35:28.413191 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:28.449841 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.509779ms\"\nI0416 04:35:28.665036 1 service.go:301] Service resourcequota-1343/test-service-np updated: 0 ports\nI0416 04:35:28.665081 1 service.go:441] Removing service port \"resourcequota-1343/test-service-np\"\nI0416 04:35:28.665183 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:28.747530 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"82.442131ms\"\nI0416 04:35:30.726105 1 service.go:301] Service services-3993/nodeport-range-test updated: 1 ports\nI0416 04:35:30.726382 1 service.go:416] Adding new service port \"services-3993/nodeport-range-test\" at 100.66.29.148:80/TCP\nI0416 04:35:30.726868 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:30.755429 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-3993/nodeport-range-test\\\" (:30177/tcp4)\"\nI0416 04:35:30.763316 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.94157ms\"\nI0416 04:35:30.763452 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:30.790996 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"27.653624ms\"\nI0416 04:35:31.433570 1 service.go:301] Service services-3993/nodeport-range-test updated: 0 ports\nI0416 04:35:31.791803 1 service.go:441] Removing service port \"services-3993/nodeport-range-test\"\nI0416 04:35:31.792053 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:31.822926 1 proxier.go:813] \"SyncProxyRules complete\" 
elapsed=\"31.131427ms\"\nI0416 04:35:38.727354 1 service.go:301] Service services-2553/nodeport-update-service updated: 0 ports\nI0416 04:35:38.727780 1 service.go:441] Removing service port \"services-2553/nodeport-update-service:tcp-port\"\nI0416 04:35:38.727934 1 service.go:441] Removing service port \"services-2553/nodeport-update-service:udp-port\"\nI0416 04:35:38.728100 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:38.814571 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"86.790373ms\"\nI0416 04:35:38.814675 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:38.845032 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.428363ms\"\nI0416 04:35:47.639637 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:47.671204 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.669981ms\"\nI0416 04:35:47.878242 1 service.go:301] Service services-1477/sourceip-test updated: 0 ports\nI0416 04:35:47.878337 1 service.go:441] Removing service port \"services-1477/sourceip-test\"\nI0416 04:35:47.878495 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:47.999810 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"121.46112ms\"\nI0416 04:35:49.000284 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:49.034070 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.951495ms\"\nI0416 04:35:52.725336 1 service.go:301] Service webhook-7388/e2e-test-webhook updated: 1 ports\nI0416 04:35:52.725379 1 service.go:416] Adding new service port \"webhook-7388/e2e-test-webhook\" at 100.64.25.135:8443/TCP\nI0416 04:35:52.725796 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:52.757337 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.957053ms\"\nI0416 04:35:52.757445 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:52.810530 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"53.159848ms\"\nI0416 04:35:56.707121 1 service.go:301] Service kubectl-8920/agnhost-primary updated: 1 ports\nI0416 04:35:56.707747 1 service.go:416] Adding new service port \"kubectl-8920/agnhost-primary\" at 100.70.175.221:6379/TCP\nI0416 04:35:56.707961 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:56.769143 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"61.969129ms\"\nI0416 04:35:56.770240 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:56.822700 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"53.521422ms\"\nI0416 04:35:57.303029 1 service.go:301] Service webhook-7388/e2e-test-webhook updated: 0 ports\nI0416 04:35:57.824539 1 service.go:441] Removing service port \"webhook-7388/e2e-test-webhook\"\nI0416 04:35:57.824829 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:35:57.876294 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"51.761716ms\"\nI0416 04:36:11.498389 1 service.go:301] Service services-5823/tolerate-unready updated: 1 ports\nI0416 04:36:11.498977 1 service.go:416] Adding new service port \"services-5823/tolerate-unready:http\" at 100.67.60.139:80/TCP\nI0416 04:36:11.499195 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:11.533821 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.851261ms\"\nI0416 04:36:11.534014 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:11.574174 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.242521ms\"\nI0416 04:36:12.180242 1 service.go:301] Service kubectl-8920/agnhost-primary updated: 0 ports\nI0416 04:36:12.575630 1 service.go:441] Removing service port \"kubectl-8920/agnhost-primary\"\nI0416 04:36:12.575773 1 
proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:12.624675 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"49.054962ms\"\nI0416 04:36:13.946233 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:13.984736 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.612445ms\"\nI0416 04:36:15.459433 1 service.go:301] Service services-6780/service-headless-toggled updated: 0 ports\nI0416 04:36:15.459642 1 service.go:441] Removing service port \"services-6780/service-headless-toggled\"\nI0416 04:36:15.459890 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:15.538316 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"78.667538ms\"\nI0416 04:36:15.540719 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:15.629115 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"90.667561ms\"\nI0416 04:36:22.520871 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:22.582881 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"62.158695ms\"\nI0416 04:36:22.583584 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:22.650355 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"67.426865ms\"\nI0416 04:36:22.731798 1 service.go:301] Service services-6926/service-proxy-toggled updated: 0 ports\nI0416 04:36:23.650562 1 service.go:441] Removing service port \"services-6926/service-proxy-toggled\"\nI0416 04:36:23.651185 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:23.728538 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"77.987811ms\"\nI0416 04:36:27.248822 1 service.go:301] Service services-1817/externalip-test updated: 0 ports\nI0416 04:36:27.249326 1 service.go:441] Removing service port \"services-1817/externalip-test:http\"\nI0416 04:36:27.249495 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:27.300444 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"51.125728ms\"\nI0416 04:36:27.300628 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:27.345705 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"45.206849ms\"\nI0416 04:36:33.769656 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:33.829599 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"60.048837ms\"\nI0416 04:36:34.000658 1 service.go:301] Service services-31/endpoint-test2 updated: 0 ports\nI0416 04:36:34.000849 1 service.go:441] Removing service port \"services-31/endpoint-test2\"\nI0416 04:36:34.001327 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:34.039623 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.771142ms\"\nI0416 04:36:35.039938 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:35.077756 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"37.989686ms\"\nI0416 04:36:39.733626 1 service.go:301] Service endpointslice-5636/example-empty-selector updated: 1 ports\nI0416 04:36:39.734646 1 service.go:416] Adding new service port \"endpointslice-5636/example-empty-selector:example\" at 100.64.7.57:80/TCP\nI0416 04:36:39.734820 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:39.771962 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.28482ms\"\nI0416 04:36:39.772093 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:36:39.798949 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"26.959779ms\"\nI0416 04:36:40.440843 1 service.go:301] Service endpointslice-5636/example-empty-selector updated: 0 ports\nI0416 04:36:40.799816 1 service.go:441] Removing service port \"endpointslice-5636/example-empty-selector:example\"\nI0416 04:36:40.800016 1 proxier.go:846] \"Syncing 
iptables rules\"\nI0416 04:36:40.837891 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.109383ms\"\nI0416 04:37:02.485972 1 service.go:301] Service aggregator-3805/sample-api updated: 1 ports\nI0416 04:37:02.486017 1 service.go:416] Adding new service port \"aggregator-3805/sample-api\" at 100.71.142.186:7443/TCP\nI0416 04:37:02.486233 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:37:02.518406 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.379538ms\"\nI0416 04:37:02.518539 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:37:02.561457 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"43.009356ms\"\nI0416 04:37:03.777393 1 service.go:301] Service apply-3502/test-svc updated: 1 ports\nI0416 04:37:03.777430 1 service.go:416] Adding new service port \"apply-3502/test-svc\" at 100.68.162.205:8080/UDP\nI0416 04:37:03.777588 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:37:03.808926 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.493243ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-42-21.ap-south-1.compute.internal ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-50-117.ap-south-1.compute.internal ====\nI0416 04:16:19.542745 1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0416 04:16:19.543395 1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0416 04:16:19.543408 1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0416 04:16:19.543416 1 flags.go:59] FLAG: --bind-address-hard-fail=\"false\"\nI0416 04:16:19.543423 1 flags.go:59] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0416 04:16:19.543429 1 flags.go:59] FLAG: --cleanup=\"false\"\nI0416 04:16:19.543506 1 flags.go:59] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0416 04:16:19.543513 1 flags.go:59] FLAG: --config=\"\"\nI0416 04:16:19.543518 1 flags.go:59] FLAG: --config-sync-period=\"15m0s\"\nI0416 04:16:19.543525 1 flags.go:59] FLAG: --conntrack-max-per-core=\"131072\"\nI0416 04:16:19.543531 1 flags.go:59] FLAG: --conntrack-min=\"131072\"\nI0416 04:16:19.543536 1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0416 04:16:19.543541 1 flags.go:59] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0416 04:16:19.543550 1 flags.go:59] FLAG: --detect-local-mode=\"\"\nI0416 04:16:19.543556 1 flags.go:59] FLAG: --feature-gates=\"\"\nI0416 04:16:19.543562 1 flags.go:59] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0416 04:16:19.543568 1 flags.go:59] FLAG: --healthz-port=\"10256\"\nI0416 04:16:19.543573 1 flags.go:59] FLAG: --help=\"false\"\nI0416 04:16:19.543578 1 flags.go:59] FLAG: --hostname-override=\"ip-172-20-50-117.ap-south-1.compute.internal\"\nI0416 04:16:19.543583 1 flags.go:59] FLAG: --iptables-masquerade-bit=\"14\"\nI0416 04:16:19.543592 1 flags.go:59] FLAG: --iptables-min-sync-period=\"1s\"\nI0416 04:16:19.543597 1 flags.go:59] FLAG: --iptables-sync-period=\"30s\"\nI0416 04:16:19.543693 1 flags.go:59] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0416 04:16:19.543722 1 flags.go:59] FLAG: --ipvs-min-sync-period=\"0s\"\nI0416 04:16:19.543727 1 flags.go:59] FLAG: --ipvs-scheduler=\"\"\nI0416 04:16:19.543732 1 flags.go:59] FLAG: --ipvs-strict-arp=\"false\"\nI0416 04:16:19.543736 1 flags.go:59] FLAG: --ipvs-sync-period=\"30s\"\nI0416 04:16:19.543745 1 flags.go:59] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0416 04:16:19.543750 1 flags.go:59] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0416 04:16:19.543754 1 flags.go:59] FLAG: --ipvs-udp-timeout=\"0s\"\nI0416 04:16:19.543759 1 flags.go:59] FLAG: 
--kube-api-burst=\"10\"\nI0416 04:16:19.543764 1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0416 04:16:19.543769 1 flags.go:59] FLAG: --kube-api-qps=\"5\"\nI0416 04:16:19.543776 1 flags.go:59] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0416 04:16:19.543785 1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0416 04:16:19.543794 1 flags.go:59] FLAG: --log-dir=\"\"\nI0416 04:16:19.543799 1 flags.go:59] FLAG: --log-file=\"/var/log/kube-proxy.log\"\nI0416 04:16:19.543807 1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0416 04:16:19.543812 1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0416 04:16:19.543817 1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0416 04:16:19.543821 1 flags.go:59] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0416 04:16:19.543832 1 flags.go:59] FLAG: --masquerade-all=\"false\"\nI0416 04:16:19.543837 1 flags.go:59] FLAG: --master=\"https://api.internal.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io\"\nI0416 04:16:19.543844 1 flags.go:59] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0416 04:16:19.543849 1 flags.go:59] FLAG: --metrics-port=\"10249\"\nI0416 04:16:19.543853 1 flags.go:59] FLAG: --nodeport-addresses=\"[]\"\nI0416 04:16:19.543936 1 flags.go:59] FLAG: --one-output=\"false\"\nI0416 04:16:19.543942 1 flags.go:59] FLAG: --oom-score-adj=\"-998\"\nI0416 04:16:19.543951 1 flags.go:59] FLAG: --profiling=\"false\"\nI0416 04:16:19.543956 1 flags.go:59] FLAG: --proxy-mode=\"\"\nI0416 04:16:19.543962 1 flags.go:59] FLAG: --proxy-port-range=\"\"\nI0416 04:16:19.543968 1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0416 04:16:19.543978 1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0416 04:16:19.543984 1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0416 04:16:19.543988 1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0416 04:16:19.543997 1 flags.go:59] FLAG: --udp-timeout=\"250ms\"\nI0416 04:16:19.544002 1 flags.go:59] FLAG: --v=\"2\"\nI0416 04:16:19.544007 1 flags.go:59] FLAG: --version=\"false\"\nI0416 04:16:19.544015 1 flags.go:59] FLAG: --vmodule=\"\"\nI0416 04:16:19.544020 1 flags.go:59] FLAG: --write-config-to=\"\"\nW0416 04:16:19.544032 1 server.go:224] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. 
Please begin using a config file ASAP.\nI0416 04:16:19.544155 1 feature_gate.go:245] feature gates: &{map[]}\nI0416 04:16:19.544377 1 feature_gate.go:245] feature gates: &{map[]}\nE0416 04:16:49.592061 1 node.go:161] Failed to retrieve node info: Get \"https://api.internal.e2e-e2e-kops-grid-flannel-amzn2-k22-ko22-containerd.test-cncf-aws.k8s.io/api/v1/nodes/ip-172-20-50-117.ap-south-1.compute.internal\": dial tcp 203.0.113.123:443: i/o timeout\nI0416 04:16:50.784895 1 node.go:172] Successfully retrieved node IP: 172.20.50.117\nI0416 04:16:50.784923 1 server_others.go:140] Detected node IP 172.20.50.117\nW0416 04:16:50.784944 1 server_others.go:565] Unknown proxy mode \"\", assuming iptables proxy\nI0416 04:16:50.785030 1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'\nI0416 04:16:50.818411 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary\nI0416 04:16:50.818442 1 server_others.go:212] Using iptables Proxier.\nI0416 04:16:50.818455 1 server_others.go:219] creating dualStackProxier for iptables.\nW0416 04:16:50.818469 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\nI0416 04:16:50.818542 1 utils.go:370] Changed sysctl \"net/ipv4/conf/all/route_localnet\": 0 -> 1\nI0416 04:16:50.818599 1 proxier.go:276] \"Missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended\"\nI0416 04:16:50.818622 1 proxier.go:282] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI0416 04:16:50.818668 1 proxier.go:328] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0416 04:16:50.818726 1 proxier.go:338] \"Iptables supports --random-fully\" ipFamily=IPv4\nI0416 04:16:50.818789 1 proxier.go:276] \"Missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended\"\nI0416 04:16:50.818805 1 proxier.go:282] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI0416 04:16:50.818846 1 proxier.go:328] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0416 04:16:50.818870 1 proxier.go:338] \"Iptables supports --random-fully\" ipFamily=IPv6\nI0416 04:16:50.819014 1 server.go:649] Version: v1.22.8\nI0416 04:16:50.820280 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144\nI0416 04:16:50.820317 1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI0416 04:16:50.820411 1 mount_linux.go:207] Detected OS without systemd\nI0416 04:16:50.820835 1 conntrack.go:83] Setting conntrack hashsize to 65536\nI0416 04:16:50.827790 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0416 04:16:50.827851 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0416 04:16:50.828053 1 config.go:315] Starting service config controller\nI0416 04:16:50.828072 1 shared_informer.go:240] Waiting for caches to sync for service config\nI0416 04:16:50.828098 1 config.go:224] Starting endpoint slice config controller\nI0416 04:16:50.828104 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nI0416 04:16:50.829990 1 service.go:301] Service default/kubernetes updated: 1 ports\nI0416 04:16:50.830029 1 service.go:301] Service kube-system/kube-dns updated: 3 ports\nI0416 04:16:50.928695 1 shared_informer.go:247] Caches are synced for endpoint slice config \nI0416 04:16:50.929069 1 proxier.go:805] \"Not syncing 
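
Note on the conntrack lines above: the numbers follow from the flags dumped at startup. With --conntrack-max-per-core=131072 on a 2-vCPU worker (these nodes are t3.medium), kube-proxy sizes nf_conntrack_max at 131072 x 2 = 262144, floored at --conntrack-min, and uses a hash table of a quarter of that, 65536 buckets. A minimal Go sketch of that arithmetic, assuming the documented sizing rule; this is an illustration, not the kube-proxy source, and the paths are the ones its log messages name:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        const maxPerCore = 131072 // --conntrack-max-per-core
        const floor = 131072      // --conntrack-min

        max := maxPerCore * runtime.NumCPU() // 131072 * 2 vCPUs = 262144
        if max < floor {
            max = floor
        }
        hashsize := max / 4 // 262144 / 4 = 65536, as logged

        // kube-proxy applies these with root privileges; shown here as targets only.
        fmt.Printf("net/netfilter/nf_conntrack_max = %d\n", max)
        fmt.Printf("/sys/module/nf_conntrack/parameters/hashsize = %d\n", hashsize)
    }
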
I0416 04:16:50.929240 1 proxier.go:805] "Not syncing iptables until Services and Endpoints have been received from master"
I0416 04:16:50.928700 1 shared_informer.go:247] Caches are synced for service config
I0416 04:16:50.929476 1 service.go:416] Adding new service port "default/kubernetes:https" at 100.64.0.1:443/TCP
I0416 04:16:50.929498 1 service.go:416] Adding new service port "kube-system/kube-dns:dns" at 100.64.0.10:53/UDP
I0416 04:16:50.929547 1 service.go:416] Adding new service port "kube-system/kube-dns:dns-tcp" at 100.64.0.10:53/TCP
I0416 04:16:50.929607 1 service.go:416] Adding new service port "kube-system/kube-dns:metrics" at 100.64.0.10:9153/TCP
I0416 04:16:50.929709 1 proxier.go:846] "Syncing iptables rules"
I0416 04:16:50.981368 1 proxier.go:813] "SyncProxyRules complete" elapsed="51.913718ms"
I0416 04:16:50.981605 1 proxier.go:846] "Syncing iptables rules"
I0416 04:16:51.017648 1 proxier.go:813] "SyncProxyRules complete" elapsed="36.046593ms"
I0416 04:16:56.071709 1 proxier.go:846] "Syncing iptables rules"
I0416 04:16:56.106995 1 proxier.go:813] "SyncProxyRules complete" elapsed="35.374239ms"
I0416 04:16:56.107208 1 proxier.go:830] "Stale service" protocol="udp" svcPortName="kube-system/kube-dns:dns" clusterIP="100.64.0.10"
I0416 04:16:56.107275 1 proxier.go:846] "Syncing iptables rules"
I0416 04:16:56.144084 1 proxier.go:813] "SyncProxyRules complete" elapsed="37.035528ms"
I0416 04:16:57.157170 1 proxier.go:846] "Syncing iptables rules"
I0416 04:16:57.187108 1 proxier.go:813] "SyncProxyRules complete" elapsed="29.976787ms"
I0416 04:16:58.187599 1 proxier.go:846] "Syncing iptables rules"
I0416 04:16:58.220704 1 proxier.go:813] "SyncProxyRules complete" elapsed="33.166683ms"
I0416 04:20:23.736429 1 service.go:301] Service services-5694/hairpin-test updated: 1 ports
I0416 04:20:23.736501 1 service.go:416] Adding new service port "services-5694/hairpin-test" at 100.71.71.24:8080/TCP
I0416 04:20:23.736533 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:23.791179 1 proxier.go:813] "SyncProxyRules complete" elapsed="54.698878ms"
I0416 04:20:23.791229 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:23.828985 1 proxier.go:813] "SyncProxyRules complete" elapsed="37.762419ms"
W0416 04:20:25.095711 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing5t7tp
W0416 04:20:25.331095 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingrf92q
W0416 04:20:25.566157 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingpmlck
W0416 04:20:26.981768 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingpmlck
W0416 04:20:27.452097 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingpmlck
W0416 04:20:27.688362 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingpmlck
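
The warnings in this stretch (two more follow below) come from kube-proxy's EndpointSlice cache: every EndpointSlice must carry the kubernetes.io/service-name label identifying the Service that owns it, and the slices this e2e ingress test creates by hand omit the label, so kube-proxy cannot map them to a service port and skips them. A sketch of a correctly labelled slice via client-go; the names are hypothetical and clientset wiring is elided:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        discoveryv1 "k8s.io/api/discovery/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func ptr[T any](v T) *T { return &v }

    // createExampleSlice creates an EndpointSlice that kube-proxy will accept.
    func createExampleSlice(ctx context.Context, cs kubernetes.Interface) error {
        slice := &discoveryv1.EndpointSlice{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "example-abc12",
                Namespace: "default",
                // The label the warnings say is missing: it ties the slice
                // to the Service ("example") that owns it.
                Labels: map[string]string{discoveryv1.LabelServiceName: "example"},
            },
            AddressType: discoveryv1.AddressTypeIPv4,
            Endpoints:   []discoveryv1.Endpoint{{Addresses: []string{"100.96.1.10"}}},
            Ports: []discoveryv1.EndpointPort{{
                Name:     ptr("http"),
                Port:     ptr(int32(80)),
                Protocol: ptr(corev1.ProtocolTCP),
            }},
        }
        _, err := cs.DiscoveryV1().EndpointSlices(slice.Namespace).Create(ctx, slice, metav1.CreateOptions{})
        return err
    }
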
W0416 04:20:28.394911 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ing5t7tp
W0416 04:20:28.396763 1 endpoints.go:276] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingrf92q
I0416 04:20:28.733365 1 service.go:301] Service proxy-8835/proxy-service-gl6lf updated: 4 ports
I0416 04:20:28.733406 1 service.go:416] Adding new service port "proxy-8835/proxy-service-gl6lf:portname2" at 100.71.171.178:81/TCP
I0416 04:20:28.733422 1 service.go:416] Adding new service port "proxy-8835/proxy-service-gl6lf:tlsportname1" at 100.71.171.178:443/TCP
I0416 04:20:28.733432 1 service.go:416] Adding new service port "proxy-8835/proxy-service-gl6lf:tlsportname2" at 100.71.171.178:444/TCP
I0416 04:20:28.733442 1 service.go:416] Adding new service port "proxy-8835/proxy-service-gl6lf:portname1" at 100.71.171.178:80/TCP
I0416 04:20:28.733475 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:28.772301 1 proxier.go:813] "SyncProxyRules complete" elapsed="38.881678ms"
I0416 04:20:28.772496 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:28.809132 1 proxier.go:813] "SyncProxyRules complete" elapsed="36.65007ms"
I0416 04:20:29.978368 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:30.035810 1 proxier.go:813] "SyncProxyRules complete" elapsed="57.467582ms"
I0416 04:20:31.036944 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:31.084795 1 proxier.go:813] "SyncProxyRules complete" elapsed="47.935342ms"
I0416 04:20:32.085594 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:32.150715 1 proxier.go:813] "SyncProxyRules complete" elapsed="65.217792ms"
I0416 04:20:38.422517 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:38.462426 1 proxier.go:813] "SyncProxyRules complete" elapsed="39.9749ms"
I0416 04:20:39.716265 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:39.755158 1 proxier.go:813] "SyncProxyRules complete" elapsed="38.968005ms"
I0416 04:20:39.755238 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:39.798731 1 proxier.go:813] "SyncProxyRules complete" elapsed="43.52852ms"
I0416 04:20:42.906019 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:42.958253 1 service.go:301] Service services-5694/hairpin-test updated: 0 ports
I0416 04:20:42.978602 1 proxier.go:813] "SyncProxyRules complete" elapsed="72.628422ms"
I0416 04:20:42.978640 1 service.go:441] Removing service port "services-5694/hairpin-test"
I0416 04:20:42.978698 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:43.059078 1 proxier.go:813] "SyncProxyRules complete" elapsed="80.421121ms"
I0416 04:20:45.297446 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:45.329934 1 proxier.go:813] "SyncProxyRules complete" elapsed="32.507059ms"
I0416 04:20:45.394049 1 service.go:301] Service proxy-8835/proxy-service-gl6lf updated: 0 ports
I0416 04:20:45.394155 1 service.go:441] Removing service port "proxy-8835/proxy-service-gl6lf:tlsportname1"
I0416 04:20:45.394186 1 service.go:441] Removing service port "proxy-8835/proxy-service-gl6lf:tlsportname2"
I0416 04:20:45.394219 1 service.go:441] Removing service port "proxy-8835/proxy-service-gl6lf:portname1"
I0416 04:20:45.394251 1 service.go:441] Removing service port "proxy-8835/proxy-service-gl6lf:portname2"
I0416 04:20:45.394304 1 proxier.go:846] "Syncing iptables rules"
I0416 04:20:45.431332 1 proxier.go:813] "SyncProxyRules complete" elapsed="37.166788ms"
I0416 04:21:31.558787 1 service.go:301] Service services-538/test-service-6bpqw updated: 1 ports
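
For context on the proxy-8835/proxy-service-gl6lf entries above: that Service exposes four ports on a single ClusterIP, and kube-proxy tracks each one as its own service port ("...:portname1", "...:tlsportname1", and so on), adding and removing them individually. Kubernetes requires a name on every port whenever a Service has more than one, which is where those suffixes come from. A sketch of such a Service object in Go; an illustrative manifest, not the test's actual source:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // multiPortService mirrors the shape of proxy-service-gl6lf: one ClusterIP,
    // four named ports that kube-proxy can address individually.
    func multiPortService() *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "proxy-service", Namespace: "proxy-8835"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "proxy"},
                Ports: []corev1.ServicePort{
                    {Name: "portname1", Port: 80, Protocol: corev1.ProtocolTCP},
                    {Name: "portname2", Port: 81, Protocol: corev1.ProtocolTCP},
                    {Name: "tlsportname1", Port: 443, Protocol: corev1.ProtocolTCP},
                    {Name: "tlsportname2", Port: 444, Protocol: corev1.ProtocolTCP},
                },
            },
        }
    }
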
I0416 04:21:31.558828 1 service.go:416] Adding new service port "services-538/test-service-6bpqw:http" at 100.66.168.152:80/TCP
I0416 04:21:31.558862 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:31.591216 1 proxier.go:813] "SyncProxyRules complete" elapsed="32.382171ms"
I0416 04:21:32.269232 1 service.go:301] Service services-538/test-service-6bpqw updated: 1 ports
I0416 04:21:32.269275 1 service.go:418] Updating existing service port "services-538/test-service-6bpqw:http" at 100.66.168.152:80/TCP
I0416 04:21:32.269309 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:32.304406 1 proxier.go:813] "SyncProxyRules complete" elapsed="35.125929ms"
I0416 04:21:33.933052 1 service.go:301] Service services-538/test-service-6bpqw updated: 0 ports
I0416 04:21:33.933092 1 service.go:441] Removing service port "services-538/test-service-6bpqw:http"
I0416 04:21:33.933128 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:33.966314 1 proxier.go:813] "SyncProxyRules complete" elapsed="33.211232ms"
I0416 04:21:44.981777 1 service.go:301] Service svc-latency-3981/latency-svc-49w6x updated: 1 ports
I0416 04:21:44.981822 1 service.go:416] Adding new service port "svc-latency-3981/latency-svc-49w6x" at 100.64.111.15:80/TCP
I0416 04:21:44.981857 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:45.014192 1 proxier.go:813] "SyncProxyRules complete" elapsed="32.363294ms"
I0416 04:21:45.014389 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:45.044235 1 proxier.go:813] "SyncProxyRules complete" elapsed="29.866081ms"
I0416 04:21:45.238956 1 service.go:301] Service svc-latency-3981/latency-svc-fglqx updated: 1 ports
I0416 04:21:45.247807 1 service.go:301] Service svc-latency-3981/latency-svc-p9h96 updated: 1 ports
I0416 04:21:45.253182 1 service.go:301] Service svc-latency-3981/latency-svc-qrm7g updated: 1 ports
I0416 04:21:45.260207 1 service.go:301] Service svc-latency-3981/latency-svc-qmd8f updated: 1 ports
I0416 04:21:45.278198 1 service.go:301] Service svc-latency-3981/latency-svc-fw4ht updated: 1 ports
I0416 04:21:45.473582 1 service.go:301] Service svc-latency-3981/latency-svc-2x2b2 updated: 1 ports
I0416 04:21:45.496699 1 service.go:301] Service svc-latency-3981/latency-svc-c8p2l updated: 1 ports
I0416 04:21:45.502227 1 service.go:301] Service svc-latency-3981/latency-svc-vcpfc updated: 1 ports
I0416 04:21:45.510016 1 service.go:301] Service svc-latency-3981/latency-svc-xjbfs updated: 1 ports
I0416 04:21:45.516610 1 service.go:301] Service svc-latency-3981/latency-svc-q7xh9 updated: 1 ports
I0416 04:21:45.529164 1 service.go:301] Service svc-latency-3981/latency-svc-4sxrn updated: 1 ports
I0416 04:21:45.531573 1 service.go:301] Service svc-latency-3981/latency-svc-pkvbd updated: 1 ports
I0416 04:21:45.539252 1 service.go:301] Service svc-latency-3981/latency-svc-24kvw updated: 1 ports
I0416 04:21:45.546103 1 service.go:301] Service svc-latency-3981/latency-svc-x2k7p updated: 1 ports
I0416 04:21:45.552378 1 service.go:301] Service svc-latency-3981/latency-svc-p7zbc updated: 1 ports
I0416 04:21:45.556868 1 service.go:301] Service svc-latency-3981/latency-svc-2lf5w updated: 1 ports
I0416 04:21:45.565321 1 service.go:301] Service svc-latency-3981/latency-svc-r7mp2 updated: 1 ports
I0416 04:21:45.571359 1 service.go:301] Service svc-latency-3981/latency-svc-dtb4m updated: 1 ports
I0416 04:21:45.577158 1 service.go:301] Service svc-latency-3981/latency-svc-lhtjh
updated: 1 ports\nI0416 04:21:45.583866 1 service.go:301] Service svc-latency-3981/latency-svc-6txzc updated: 1 ports\nI0416 04:21:45.732937 1 service.go:301] Service svc-latency-3981/latency-svc-j88m6 updated: 1 ports\nI0416 04:21:45.759003 1 service.go:301] Service svc-latency-3981/latency-svc-kw5hl updated: 1 ports\nI0416 04:21:45.764587 1 service.go:301] Service svc-latency-3981/latency-svc-d7rjl updated: 1 ports\nI0416 04:21:45.769298 1 service.go:301] Service svc-latency-3981/latency-svc-d9q85 updated: 1 ports\nI0416 04:21:45.777156 1 service.go:301] Service svc-latency-3981/latency-svc-qgpmd updated: 1 ports\nI0416 04:21:45.786740 1 service.go:301] Service svc-latency-3981/latency-svc-gdvbv updated: 1 ports\nI0416 04:21:45.795926 1 service.go:301] Service svc-latency-3981/latency-svc-kwzpv updated: 1 ports\nI0416 04:21:45.809062 1 service.go:301] Service svc-latency-3981/latency-svc-6kkhg updated: 1 ports\nI0416 04:21:45.815511 1 service.go:301] Service svc-latency-3981/latency-svc-tnsf2 updated: 1 ports\nI0416 04:21:45.824393 1 service.go:301] Service svc-latency-3981/latency-svc-g85nc updated: 1 ports\nI0416 04:21:45.838177 1 service.go:301] Service svc-latency-3981/latency-svc-x88zw updated: 1 ports\nI0416 04:21:45.844988 1 service.go:301] Service svc-latency-3981/latency-svc-7vw8t updated: 1 ports\nI0416 04:21:45.860594 1 service.go:301] Service svc-latency-3981/latency-svc-4mdx5 updated: 1 ports\nI0416 04:21:45.971912 1 service.go:301] Service svc-latency-3981/latency-svc-n2tgb updated: 1 ports\nI0416 04:21:45.976874 1 service.go:301] Service svc-latency-3981/latency-svc-r7jf4 updated: 1 ports\nI0416 04:21:45.983445 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-c8p2l\" at 100.67.55.209:80/TCP\nI0416 04:21:45.983484 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4sxrn\" at 100.66.25.231:80/TCP\nI0416 04:21:45.983496 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6kkhg\" at 100.68.35.41:80/TCP\nI0416 04:21:45.983506 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-tnsf2\" at 100.71.133.105:80/TCP\nI0416 04:21:45.983521 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fglqx\" at 100.65.93.75:80/TCP\nI0416 04:21:45.983533 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qrm7g\" at 100.69.205.223:80/TCP\nI0416 04:21:45.983545 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-r7mp2\" at 100.65.132.22:80/TCP\nI0416 04:21:45.983572 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-lhtjh\" at 100.70.10.168:80/TCP\nI0416 04:21:45.983587 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-j88m6\" at 100.67.135.125:80/TCP\nI0416 04:21:45.983639 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-gdvbv\" at 100.67.231.202:80/TCP\nI0416 04:21:45.983657 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g85nc\" at 100.65.75.242:80/TCP\nI0416 04:21:45.983669 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-p9h96\" at 100.67.33.69:80/TCP\nI0416 04:21:45.983682 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2x2b2\" at 100.69.96.72:80/TCP\nI0416 04:21:45.983699 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-x88zw\" at 100.70.8.125:80/TCP\nI0416 04:21:45.983727 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-r7jf4\" at 
100.69.233.115:80/TCP\nI0416 04:21:45.983741 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-24kvw\" at 100.71.193.194:80/TCP\nI0416 04:21:45.983755 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6txzc\" at 100.65.135.3:80/TCP\nI0416 04:21:45.983766 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2lf5w\" at 100.66.247.166:80/TCP\nI0416 04:21:45.983790 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qgpmd\" at 100.64.171.223:80/TCP\nI0416 04:21:45.983805 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xjbfs\" at 100.67.123.74:80/TCP\nI0416 04:21:45.983819 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-pkvbd\" at 100.71.146.204:80/TCP\nI0416 04:21:45.983834 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kw5hl\" at 100.65.168.41:80/TCP\nI0416 04:21:45.983853 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-d7rjl\" at 100.69.106.86:80/TCP\nI0416 04:21:45.983867 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fw4ht\" at 100.71.233.83:80/TCP\nI0416 04:21:45.983883 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-vcpfc\" at 100.70.202.77:80/TCP\nI0416 04:21:45.983898 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kwzpv\" at 100.65.158.70:80/TCP\nI0416 04:21:45.983914 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-d9q85\" at 100.66.226.120:80/TCP\nI0416 04:21:45.983928 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4mdx5\" at 100.65.65.97:80/TCP\nI0416 04:21:45.983943 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-n2tgb\" at 100.67.18.53:80/TCP\nI0416 04:21:45.983960 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qmd8f\" at 100.70.201.118:80/TCP\nI0416 04:21:45.983985 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-x2k7p\" at 100.68.94.179:80/TCP\nI0416 04:21:45.983998 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-dtb4m\" at 100.69.228.146:80/TCP\nI0416 04:21:45.984008 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7vw8t\" at 100.71.59.41:80/TCP\nI0416 04:21:45.984022 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-q7xh9\" at 100.70.95.236:80/TCP\nI0416 04:21:45.984046 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-p7zbc\" at 100.70.179.25:80/TCP\nI0416 04:21:45.986474 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:45.988776 1 service.go:301] Service svc-latency-3981/latency-svc-44wwt updated: 1 ports\nI0416 04:21:46.010771 1 service.go:301] Service svc-latency-3981/latency-svc-xf99p updated: 1 ports\nI0416 04:21:46.031249 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"47.804644ms\"\nI0416 04:21:46.041532 1 service.go:301] Service svc-latency-3981/latency-svc-qxhct updated: 1 ports\nI0416 04:21:46.054045 1 service.go:301] Service svc-latency-3981/latency-svc-kr2r7 updated: 1 ports\nI0416 04:21:46.060297 1 service.go:301] Service svc-latency-3981/latency-svc-9jdjb updated: 1 ports\nI0416 04:21:46.070304 1 service.go:301] Service svc-latency-3981/latency-svc-msttv updated: 1 ports\nI0416 04:21:46.076437 1 service.go:301] Service svc-latency-3981/latency-svc-bkp2m updated: 1 ports\nI0416 04:21:46.084604 1 service.go:301] Service svc-latency-3981/latency-svc-rdtjw updated: 1 
ports\nI0416 04:21:46.091368 1 service.go:301] Service svc-latency-3981/latency-svc-xrnwp updated: 1 ports\nI0416 04:21:46.095674 1 service.go:301] Service svc-latency-3981/latency-svc-7mgwx updated: 1 ports\nI0416 04:21:46.104581 1 service.go:301] Service svc-latency-3981/latency-svc-g248w updated: 1 ports\nI0416 04:21:46.111483 1 service.go:301] Service svc-latency-3981/latency-svc-xct4m updated: 1 ports\nI0416 04:21:46.117979 1 service.go:301] Service svc-latency-3981/latency-svc-4dzlb updated: 1 ports\nI0416 04:21:46.225695 1 service.go:301] Service svc-latency-3981/latency-svc-9rjdq updated: 1 ports\nI0416 04:21:46.231219 1 service.go:301] Service svc-latency-3981/latency-svc-bxzdt updated: 1 ports\nI0416 04:21:46.244820 1 service.go:301] Service svc-latency-3981/latency-svc-4wd85 updated: 1 ports\nI0416 04:21:46.288064 1 service.go:301] Service svc-latency-3981/latency-svc-46zh6 updated: 1 ports\nI0416 04:21:46.300104 1 service.go:301] Service svc-latency-3981/latency-svc-8jtd7 updated: 1 ports\nI0416 04:21:46.308381 1 service.go:301] Service svc-latency-3981/latency-svc-vhd5k updated: 1 ports\nI0416 04:21:46.312770 1 service.go:301] Service svc-latency-3981/latency-svc-wd4zt updated: 1 ports\nI0416 04:21:46.318823 1 service.go:301] Service svc-latency-3981/latency-svc-wv8nx updated: 1 ports\nI0416 04:21:46.324696 1 service.go:301] Service svc-latency-3981/latency-svc-2w5ct updated: 1 ports\nI0416 04:21:46.331204 1 service.go:301] Service svc-latency-3981/latency-svc-zpx4h updated: 1 ports\nI0416 04:21:46.338940 1 service.go:301] Service svc-latency-3981/latency-svc-kjg25 updated: 1 ports\nI0416 04:21:46.364695 1 service.go:301] Service svc-latency-3981/latency-svc-5t6sm updated: 1 ports\nI0416 04:21:46.415702 1 service.go:301] Service svc-latency-3981/latency-svc-9fnrp updated: 1 ports\nI0416 04:21:46.470504 1 service.go:301] Service svc-latency-3981/latency-svc-wx822 updated: 1 ports\nI0416 04:21:46.533847 1 service.go:301] Service svc-latency-3981/latency-svc-cz7hv updated: 1 ports\nI0416 04:21:46.567715 1 service.go:301] Service svc-latency-3981/latency-svc-fzc9p updated: 1 ports\nI0416 04:21:46.614993 1 service.go:301] Service svc-latency-3981/latency-svc-d2xrl updated: 1 ports\nI0416 04:21:46.667857 1 service.go:301] Service svc-latency-3981/latency-svc-xjlnp updated: 1 ports\nI0416 04:21:46.716215 1 service.go:301] Service svc-latency-3981/latency-svc-56brr updated: 1 ports\nI0416 04:21:46.769913 1 service.go:301] Service svc-latency-3981/latency-svc-jrn8c updated: 1 ports\nI0416 04:21:46.818937 1 service.go:301] Service svc-latency-3981/latency-svc-fbqps updated: 1 ports\nI0416 04:21:46.865866 1 service.go:301] Service svc-latency-3981/latency-svc-9nx8j updated: 1 ports\nI0416 04:21:46.916046 1 service.go:301] Service svc-latency-3981/latency-svc-qrjc4 updated: 1 ports\nI0416 04:21:46.968743 1 service.go:301] Service svc-latency-3981/latency-svc-ggwzr updated: 1 ports\nI0416 04:21:47.019951 1 service.go:301] Service svc-latency-3981/latency-svc-m9cps updated: 1 ports\nI0416 04:21:47.019987 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xrnwp\" at 100.68.210.98:80/TCP\nI0416 04:21:47.020014 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-vhd5k\" at 100.67.244.172:80/TCP\nI0416 04:21:47.020023 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-44wwt\" at 100.64.61.245:80/TCP\nI0416 04:21:47.020030 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xf99p\" at 
100.65.77.139:80/TCP\nI0416 04:21:47.020037 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9jdjb\" at 100.69.169.195:80/TCP\nI0416 04:21:47.020043 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-56brr\" at 100.71.233.229:80/TCP\nI0416 04:21:47.020050 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kr2r7\" at 100.65.210.191:80/TCP\nI0416 04:21:47.020057 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-bkp2m\" at 100.70.196.39:80/TCP\nI0416 04:21:47.020063 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wd4zt\" at 100.71.167.7:80/TCP\nI0416 04:21:47.020081 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9nx8j\" at 100.70.14.217:80/TCP\nI0416 04:21:47.020088 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2w5ct\" at 100.70.102.178:80/TCP\nI0416 04:21:47.020095 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kjg25\" at 100.68.14.101:80/TCP\nI0416 04:21:47.020101 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-rdtjw\" at 100.70.121.175:80/TCP\nI0416 04:21:47.020114 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7mgwx\" at 100.65.202.29:80/TCP\nI0416 04:21:47.020122 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9rjdq\" at 100.69.160.255:80/TCP\nI0416 04:21:47.020129 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-bxzdt\" at 100.67.116.74:80/TCP\nI0416 04:21:47.020137 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4wd85\" at 100.68.132.217:80/TCP\nI0416 04:21:47.020144 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wv8nx\" at 100.67.41.194:80/TCP\nI0416 04:21:47.020163 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-5t6sm\" at 100.66.151.119:80/TCP\nI0416 04:21:47.020169 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fzc9p\" at 100.64.130.130:80/TCP\nI0416 04:21:47.020175 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xjlnp\" at 100.65.150.47:80/TCP\nI0416 04:21:47.020181 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-d2xrl\" at 100.71.159.33:80/TCP\nI0416 04:21:47.020187 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qxhct\" at 100.67.178.161:80/TCP\nI0416 04:21:47.020193 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g248w\" at 100.66.28.205:80/TCP\nI0416 04:21:47.020199 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xct4m\" at 100.70.181.194:80/TCP\nI0416 04:21:47.020205 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-46zh6\" at 100.70.80.169:80/TCP\nI0416 04:21:47.020236 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wx822\" at 100.66.228.104:80/TCP\nI0416 04:21:47.020246 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-cz7hv\" at 100.68.180.60:80/TCP\nI0416 04:21:47.020253 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4dzlb\" at 100.65.7.44:80/TCP\nI0416 04:21:47.020259 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-jrn8c\" at 100.67.197.2:80/TCP\nI0416 04:21:47.020265 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fbqps\" at 100.70.162.54:80/TCP\nI0416 04:21:47.020273 1 service.go:416] Adding new service port 
\"svc-latency-3981/latency-svc-m9cps\" at 100.70.127.202:80/TCP\nI0416 04:21:47.020279 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-msttv\" at 100.67.212.159:80/TCP\nI0416 04:21:47.020285 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-8jtd7\" at 100.66.160.170:80/TCP\nI0416 04:21:47.020291 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zpx4h\" at 100.69.118.238:80/TCP\nI0416 04:21:47.020308 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9fnrp\" at 100.70.105.191:80/TCP\nI0416 04:21:47.020316 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qrjc4\" at 100.66.193.174:80/TCP\nI0416 04:21:47.020322 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-ggwzr\" at 100.70.21.218:80/TCP\nI0416 04:21:47.020925 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:47.066445 1 service.go:301] Service svc-latency-3981/latency-svc-k74n8 updated: 1 ports\nI0416 04:21:47.067384 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"47.391274ms\"\nI0416 04:21:47.116939 1 service.go:301] Service svc-latency-3981/latency-svc-8f5zr updated: 1 ports\nI0416 04:21:47.164995 1 service.go:301] Service svc-latency-3981/latency-svc-qx9r9 updated: 1 ports\nI0416 04:21:47.218330 1 service.go:301] Service svc-latency-3981/latency-svc-czjpm updated: 1 ports\nI0416 04:21:47.277674 1 service.go:301] Service svc-latency-3981/latency-svc-z5ghz updated: 1 ports\nI0416 04:21:47.326528 1 service.go:301] Service svc-latency-3981/latency-svc-mj5h5 updated: 1 ports\nI0416 04:21:47.373794 1 service.go:301] Service svc-latency-3981/latency-svc-l5k7g updated: 1 ports\nI0416 04:21:47.417522 1 service.go:301] Service svc-latency-3981/latency-svc-xkfsd updated: 1 ports\nI0416 04:21:47.467373 1 service.go:301] Service svc-latency-3981/latency-svc-fpts9 updated: 1 ports\nI0416 04:21:47.524378 1 service.go:301] Service svc-latency-3981/latency-svc-kk9qz updated: 1 ports\nI0416 04:21:47.572410 1 service.go:301] Service svc-latency-3981/latency-svc-444nw updated: 1 ports\nI0416 04:21:47.656675 1 service.go:301] Service svc-latency-3981/latency-svc-xkcrs updated: 1 ports\nI0416 04:21:47.679467 1 service.go:301] Service svc-latency-3981/latency-svc-7fk9l updated: 1 ports\nI0416 04:21:47.733371 1 service.go:301] Service svc-latency-3981/latency-svc-8v8st updated: 1 ports\nI0416 04:21:47.767871 1 service.go:301] Service svc-latency-3981/latency-svc-hn2xn updated: 1 ports\nI0416 04:21:47.816361 1 service.go:301] Service svc-latency-3981/latency-svc-zvshw updated: 1 ports\nI0416 04:21:47.884370 1 service.go:301] Service svc-latency-3981/latency-svc-5dll9 updated: 1 ports\nI0416 04:21:47.923321 1 service.go:301] Service svc-latency-3981/latency-svc-mk26h updated: 1 ports\nI0416 04:21:47.968461 1 service.go:301] Service svc-latency-3981/latency-svc-nmb8h updated: 1 ports\nI0416 04:21:48.016013 1 service.go:301] Service svc-latency-3981/latency-svc-6kn8x updated: 1 ports\nI0416 04:21:48.016059 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-8f5zr\" at 100.69.43.32:80/TCP\nI0416 04:21:48.016076 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-czjpm\" at 100.68.130.85:80/TCP\nI0416 04:21:48.016103 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-mj5h5\" at 100.70.184.216:80/TCP\nI0416 04:21:48.016115 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xkfsd\" at 100.67.246.40:80/TCP\nI0416 04:21:48.016124 1 
service.go:416] Adding new service port \"svc-latency-3981/latency-svc-444nw\" at 100.68.152.16:80/TCP\nI0416 04:21:48.016133 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qx9r9\" at 100.71.233.134:80/TCP\nI0416 04:21:48.016141 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-z5ghz\" at 100.69.114.121:80/TCP\nI0416 04:21:48.016151 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7fk9l\" at 100.71.219.49:80/TCP\nI0416 04:21:48.016160 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-hn2xn\" at 100.64.40.137:80/TCP\nI0416 04:21:48.016169 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zvshw\" at 100.67.82.79:80/TCP\nI0416 04:21:48.016179 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-5dll9\" at 100.66.248.10:80/TCP\nI0416 04:21:48.016188 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-mk26h\" at 100.65.114.132:80/TCP\nI0416 04:21:48.016198 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-nmb8h\" at 100.65.199.81:80/TCP\nI0416 04:21:48.016214 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-k74n8\" at 100.64.232.221:80/TCP\nI0416 04:21:48.016237 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-l5k7g\" at 100.64.147.167:80/TCP\nI0416 04:21:48.016248 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6kn8x\" at 100.66.201.0:80/TCP\nI0416 04:21:48.016258 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fpts9\" at 100.69.103.84:80/TCP\nI0416 04:21:48.016269 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-kk9qz\" at 100.70.244.22:80/TCP\nI0416 04:21:48.016278 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xkcrs\" at 100.64.51.110:80/TCP\nI0416 04:21:48.016288 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-8v8st\" at 100.70.248.203:80/TCP\nI0416 04:21:48.016493 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:48.091020 1 service.go:301] Service svc-latency-3981/latency-svc-l6j5h updated: 1 ports\nI0416 04:21:48.106596 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"90.522589ms\"\nI0416 04:21:48.115535 1 service.go:301] Service svc-latency-3981/latency-svc-7gg2n updated: 1 ports\nI0416 04:21:48.165825 1 service.go:301] Service svc-latency-3981/latency-svc-sjbwr updated: 1 ports\nI0416 04:21:48.215572 1 service.go:301] Service svc-latency-3981/latency-svc-tfgb8 updated: 1 ports\nI0416 04:21:48.268513 1 service.go:301] Service svc-latency-3981/latency-svc-w8pmh updated: 1 ports\nI0416 04:21:48.326189 1 service.go:301] Service svc-latency-3981/latency-svc-pjccn updated: 1 ports\nI0416 04:21:48.367052 1 service.go:301] Service svc-latency-3981/latency-svc-5jdjd updated: 1 ports\nI0416 04:21:48.419432 1 service.go:301] Service svc-latency-3981/latency-svc-xmxvf updated: 1 ports\nI0416 04:21:48.472798 1 service.go:301] Service svc-latency-3981/latency-svc-g947j updated: 1 ports\nI0416 04:21:48.537355 1 service.go:301] Service svc-latency-3981/latency-svc-q4kv2 updated: 1 ports\nI0416 04:21:48.566998 1 service.go:301] Service svc-latency-3981/latency-svc-hkppp updated: 1 ports\nI0416 04:21:48.616323 1 service.go:301] Service svc-latency-3981/latency-svc-t7sb8 updated: 1 ports\nI0416 04:21:48.667390 1 service.go:301] Service svc-latency-3981/latency-svc-qjn2p updated: 1 ports\nI0416 04:21:48.717106 1 service.go:301] Service 
svc-latency-3981/latency-svc-gz4hw updated: 1 ports\nI0416 04:21:48.767803 1 service.go:301] Service svc-latency-3981/latency-svc-jqc6f updated: 1 ports\nI0416 04:21:48.816756 1 service.go:301] Service svc-latency-3981/latency-svc-rqlv9 updated: 1 ports\nI0416 04:21:48.866025 1 service.go:301] Service svc-latency-3981/latency-svc-4bncr updated: 1 ports\nI0416 04:21:48.916592 1 service.go:301] Service svc-latency-3981/latency-svc-q4257 updated: 1 ports\nI0416 04:21:48.966293 1 service.go:301] Service svc-latency-3981/latency-svc-t9ht8 updated: 1 ports\nI0416 04:21:49.015178 1 service.go:301] Service svc-latency-3981/latency-svc-tmz8w updated: 1 ports\nI0416 04:21:49.015243 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-l6j5h\" at 100.68.140.150:80/TCP\nI0416 04:21:49.015257 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-sjbwr\" at 100.68.37.97:80/TCP\nI0416 04:21:49.015267 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-hkppp\" at 100.66.89.202:80/TCP\nI0416 04:21:49.015278 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-gz4hw\" at 100.70.28.55:80/TCP\nI0416 04:21:49.015286 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-jqc6f\" at 100.66.175.64:80/TCP\nI0416 04:21:49.015293 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-rqlv9\" at 100.68.239.189:80/TCP\nI0416 04:21:49.015299 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-pjccn\" at 100.70.111.177:80/TCP\nI0416 04:21:49.015329 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-q4kv2\" at 100.66.130.172:80/TCP\nI0416 04:21:49.015340 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-t7sb8\" at 100.70.201.185:80/TCP\nI0416 04:21:49.015352 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qjn2p\" at 100.71.157.148:80/TCP\nI0416 04:21:49.015364 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-4bncr\" at 100.65.42.13:80/TCP\nI0416 04:21:49.015373 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-t9ht8\" at 100.69.194.95:80/TCP\nI0416 04:21:49.015380 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-tmz8w\" at 100.70.239.15:80/TCP\nI0416 04:21:49.015388 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7gg2n\" at 100.65.9.144:80/TCP\nI0416 04:21:49.015394 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-tfgb8\" at 100.68.232.109:80/TCP\nI0416 04:21:49.015404 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-w8pmh\" at 100.68.69.29:80/TCP\nI0416 04:21:49.015411 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-5jdjd\" at 100.71.87.192:80/TCP\nI0416 04:21:49.015418 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xmxvf\" at 100.70.101.110:80/TCP\nI0416 04:21:49.015428 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g947j\" at 100.66.195.56:80/TCP\nI0416 04:21:49.015438 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-q4257\" at 100.69.90.129:80/TCP\nI0416 04:21:49.015738 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:49.070467 1 service.go:301] Service svc-latency-3981/latency-svc-n8knn updated: 1 ports\nI0416 04:21:49.074343 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"59.10605ms\"\nI0416 04:21:49.117803 1 service.go:301] Service 
svc-latency-3981/latency-svc-z7k5t updated: 1 ports\nI0416 04:21:49.167860 1 service.go:301] Service svc-latency-3981/latency-svc-qkpxm updated: 1 ports\nI0416 04:21:49.216595 1 service.go:301] Service svc-latency-3981/latency-svc-dcljc updated: 1 ports\nI0416 04:21:49.268129 1 service.go:301] Service svc-latency-3981/latency-svc-f299t updated: 1 ports\nI0416 04:21:49.317963 1 service.go:301] Service svc-latency-3981/latency-svc-5d7hs updated: 1 ports\nI0416 04:21:49.366817 1 service.go:301] Service svc-latency-3981/latency-svc-v9ngm updated: 1 ports\nI0416 04:21:49.415505 1 service.go:301] Service svc-latency-3981/latency-svc-wb26g updated: 1 ports\nI0416 04:21:49.469320 1 service.go:301] Service svc-latency-3981/latency-svc-z48q8 updated: 1 ports\nI0416 04:21:49.519994 1 service.go:301] Service svc-latency-3981/latency-svc-wc4b7 updated: 1 ports\nI0416 04:21:49.568630 1 service.go:301] Service svc-latency-3981/latency-svc-6pnn8 updated: 1 ports\nI0416 04:21:49.622272 1 service.go:301] Service svc-latency-3981/latency-svc-zwvpj updated: 1 ports\nI0416 04:21:49.666640 1 service.go:301] Service svc-latency-3981/latency-svc-g6dmz updated: 1 ports\nI0416 04:21:49.719826 1 service.go:301] Service svc-latency-3981/latency-svc-q6x9d updated: 1 ports\nI0416 04:21:49.768157 1 service.go:301] Service svc-latency-3981/latency-svc-9422l updated: 1 ports\nI0416 04:21:49.821043 1 service.go:301] Service svc-latency-3981/latency-svc-pq8ml updated: 1 ports\nI0416 04:21:49.879757 1 service.go:301] Service svc-latency-3981/latency-svc-snpps updated: 1 ports\nI0416 04:21:49.915449 1 service.go:301] Service svc-latency-3981/latency-svc-xfghb updated: 1 ports\nI0416 04:21:50.016816 1 service.go:301] Service svc-latency-3981/latency-svc-r8wlh updated: 1 ports\nI0416 04:21:50.016851 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-f299t\" at 100.66.229.26:80/TCP\nI0416 04:21:50.016864 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-v9ngm\" at 100.71.250.70:80/TCP\nI0416 04:21:50.016876 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-q6x9d\" at 100.66.159.156:80/TCP\nI0416 04:21:50.016888 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9422l\" at 100.71.8.182:80/TCP\nI0416 04:21:50.016899 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-snpps\" at 100.64.12.16:80/TCP\nI0416 04:21:50.016910 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-r8wlh\" at 100.65.174.205:80/TCP\nI0416 04:21:50.016920 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-n8knn\" at 100.66.222.224:80/TCP\nI0416 04:21:50.016930 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qkpxm\" at 100.71.99.171:80/TCP\nI0416 04:21:50.016937 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-dcljc\" at 100.70.81.56:80/TCP\nI0416 04:21:50.016943 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-z48q8\" at 100.68.235.122:80/TCP\nI0416 04:21:50.016950 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g6dmz\" at 100.71.129.123:80/TCP\nI0416 04:21:50.016957 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xfghb\" at 100.71.168.189:80/TCP\nI0416 04:21:50.016964 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-5d7hs\" at 100.64.119.193:80/TCP\nI0416 04:21:50.016974 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6pnn8\" at 
100.69.101.241:80/TCP\nI0416 04:21:50.016985 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-z7k5t\" at 100.64.195.174:80/TCP\nI0416 04:21:50.016994 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wb26g\" at 100.69.67.110:80/TCP\nI0416 04:21:50.017000 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wc4b7\" at 100.69.253.43:80/TCP\nI0416 04:21:50.017008 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zwvpj\" at 100.68.103.200:80/TCP\nI0416 04:21:50.017014 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-pq8ml\" at 100.65.243.100:80/TCP\nI0416 04:21:50.017249 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:50.067212 1 service.go:301] Service svc-latency-3981/latency-svc-86r5n updated: 1 ports\nI0416 04:21:50.122935 1 service.go:301] Service svc-latency-3981/latency-svc-k6gmx updated: 1 ports\nI0416 04:21:50.159510 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"142.64358ms\"\nI0416 04:21:50.169562 1 service.go:301] Service svc-latency-3981/latency-svc-8kwfl updated: 1 ports\nI0416 04:21:50.216322 1 service.go:301] Service svc-latency-3981/latency-svc-g6t7j updated: 1 ports\nI0416 04:21:50.267893 1 service.go:301] Service svc-latency-3981/latency-svc-hsjnr updated: 1 ports\nI0416 04:21:50.319335 1 service.go:301] Service svc-latency-3981/latency-svc-g2th9 updated: 1 ports\nI0416 04:21:50.365783 1 service.go:301] Service svc-latency-3981/latency-svc-pgcfp updated: 1 ports\nI0416 04:21:50.426870 1 service.go:301] Service svc-latency-3981/latency-svc-h9jzs updated: 1 ports\nI0416 04:21:50.464557 1 service.go:301] Service svc-latency-3981/latency-svc-sx4wq updated: 1 ports\nI0416 04:21:50.519897 1 service.go:301] Service svc-latency-3981/latency-svc-6d8r4 updated: 1 ports\nI0416 04:21:50.616637 1 service.go:301] Service svc-latency-3981/latency-svc-gmxj9 updated: 1 ports\nI0416 04:21:50.680998 1 service.go:301] Service svc-latency-3981/latency-svc-v28nm updated: 1 ports\nI0416 04:21:50.717339 1 service.go:301] Service svc-latency-3981/latency-svc-9w255 updated: 1 ports\nI0416 04:21:50.774304 1 service.go:301] Service svc-latency-3981/latency-svc-zbd7h updated: 1 ports\nI0416 04:21:50.814859 1 service.go:301] Service svc-latency-3981/latency-svc-fcc8g updated: 1 ports\nI0416 04:21:50.867278 1 service.go:301] Service svc-latency-3981/latency-svc-xh99l updated: 1 ports\nI0416 04:21:50.923545 1 service.go:301] Service svc-latency-3981/latency-svc-knbng updated: 1 ports\nI0416 04:21:50.982023 1 service.go:301] Service svc-latency-3981/latency-svc-jnmlb updated: 1 ports\nI0416 04:21:50.982191 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-k6gmx\" at 100.67.45.50:80/TCP\nI0416 04:21:50.982211 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-sx4wq\" at 100.68.58.49:80/TCP\nI0416 04:21:50.982242 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-gmxj9\" at 100.68.133.11:80/TCP\nI0416 04:21:50.982258 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-9w255\" at 100.68.235.37:80/TCP\nI0416 04:21:50.982273 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-fcc8g\" at 100.68.163.129:80/TCP\nI0416 04:21:50.982287 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g2th9\" at 100.68.46.251:80/TCP\nI0416 04:21:50.982374 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-86r5n\" at 100.65.5.63:80/TCP\nI0416 
04:21:50.982393 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-hsjnr\" at 100.69.249.163:80/TCP\nI0416 04:21:50.982407 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-h9jzs\" at 100.64.124.21:80/TCP\nI0416 04:21:50.982421 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zbd7h\" at 100.66.123.227:80/TCP\nI0416 04:21:50.982449 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-knbng\" at 100.65.215.160:80/TCP\nI0416 04:21:50.982463 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-jnmlb\" at 100.69.28.136:80/TCP\nI0416 04:21:50.982475 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-8kwfl\" at 100.64.34.171:80/TCP\nI0416 04:21:50.982489 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-g6t7j\" at 100.65.227.249:80/TCP\nI0416 04:21:50.982503 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-pgcfp\" at 100.66.115.73:80/TCP\nI0416 04:21:50.982589 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6d8r4\" at 100.65.120.24:80/TCP\nI0416 04:21:50.982602 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-v28nm\" at 100.64.4.47:80/TCP\nI0416 04:21:50.982623 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-xh99l\" at 100.64.89.114:80/TCP\nI0416 04:21:50.983005 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:51.016208 1 service.go:301] Service svc-latency-3981/latency-svc-wzbj8 updated: 1 ports\nI0416 04:21:51.073467 1 service.go:301] Service svc-latency-3981/latency-svc-dks2n updated: 1 ports\nI0416 04:21:51.083284 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"101.093352ms\"\nI0416 04:21:51.125305 1 service.go:301] Service svc-latency-3981/latency-svc-qp4nv updated: 1 ports\nI0416 04:21:51.167675 1 service.go:301] Service svc-latency-3981/latency-svc-95lzs updated: 1 ports\nI0416 04:21:51.228173 1 service.go:301] Service svc-latency-3981/latency-svc-zhk72 updated: 1 ports\nI0416 04:21:51.267135 1 service.go:301] Service svc-latency-3981/latency-svc-j8895 updated: 1 ports\nI0416 04:21:51.317291 1 service.go:301] Service svc-latency-3981/latency-svc-k2chp updated: 1 ports\nI0416 04:21:51.373095 1 service.go:301] Service svc-latency-3981/latency-svc-92phz updated: 1 ports\nI0416 04:21:51.427397 1 service.go:301] Service svc-latency-3981/latency-svc-dr54h updated: 1 ports\nI0416 04:21:51.468313 1 service.go:301] Service svc-latency-3981/latency-svc-h5rwt updated: 1 ports\nI0416 04:21:51.550713 1 service.go:301] Service svc-latency-3981/latency-svc-c627r updated: 1 ports\nI0416 04:21:51.590965 1 service.go:301] Service svc-latency-3981/latency-svc-bk9mt updated: 1 ports\nI0416 04:21:51.632282 1 service.go:301] Service svc-latency-3981/latency-svc-7wnrp updated: 1 ports\nI0416 04:21:51.670753 1 service.go:301] Service svc-latency-3981/latency-svc-rkxrw updated: 1 ports\nI0416 04:21:51.715337 1 service.go:301] Service svc-latency-3981/latency-svc-2k6bk updated: 1 ports\nI0416 04:21:51.797560 1 service.go:301] Service svc-latency-3981/latency-svc-wp8xr updated: 1 ports\nI0416 04:21:51.837840 1 service.go:301] Service svc-latency-3981/latency-svc-lrd6g updated: 1 ports\nI0416 04:21:51.873177 1 service.go:301] Service svc-latency-3981/latency-svc-2jvgg updated: 1 ports\nI0416 04:21:51.921048 1 service.go:301] Service svc-latency-3981/latency-svc-vv8q4 updated: 1 ports\nI0416 04:21:51.969553 1 service.go:301] Service 
svc-latency-3981/latency-svc-6drnp updated: 1 ports\nI0416 04:21:52.015451 1 service.go:301] Service svc-latency-3981/latency-svc-wvw6k updated: 1 ports\nI0416 04:21:52.015489 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wvw6k\" at 100.66.252.249:80/TCP\nI0416 04:21:52.015502 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wzbj8\" at 100.64.24.129:80/TCP\nI0416 04:21:52.015509 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-95lzs\" at 100.67.237.130:80/TCP\nI0416 04:21:52.015516 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-j8895\" at 100.66.80.236:80/TCP\nI0416 04:21:52.015523 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-dr54h\" at 100.70.103.136:80/TCP\nI0416 04:21:52.015535 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-c627r\" at 100.70.202.147:80/TCP\nI0416 04:21:52.015546 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-6drnp\" at 100.69.125.43:80/TCP\nI0416 04:21:52.015555 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-dks2n\" at 100.66.128.94:80/TCP\nI0416 04:21:52.015562 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-zhk72\" at 100.68.66.49:80/TCP\nI0416 04:21:52.015568 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-bk9mt\" at 100.67.199.27:80/TCP\nI0416 04:21:52.015575 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-k2chp\" at 100.71.97.153:80/TCP\nI0416 04:21:52.015583 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-92phz\" at 100.68.157.217:80/TCP\nI0416 04:21:52.015589 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-h5rwt\" at 100.69.93.81:80/TCP\nI0416 04:21:52.015597 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-rkxrw\" at 100.67.6.129:80/TCP\nI0416 04:21:52.015629 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-lrd6g\" at 100.67.38.52:80/TCP\nI0416 04:21:52.015640 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-qp4nv\" at 100.67.32.71:80/TCP\nI0416 04:21:52.015649 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-7wnrp\" at 100.67.107.151:80/TCP\nI0416 04:21:52.015660 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2k6bk\" at 100.65.47.68:80/TCP\nI0416 04:21:52.015669 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-wp8xr\" at 100.66.152.62:80/TCP\nI0416 04:21:52.015678 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-2jvgg\" at 100.66.103.196:80/TCP\nI0416 04:21:52.015686 1 service.go:416] Adding new service port \"svc-latency-3981/latency-svc-vv8q4\" at 100.66.57.14:80/TCP\nI0416 04:21:52.015940 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:52.080173 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"64.670889ms\"\nI0416 04:21:52.117387 1 service.go:301] Service svc-latency-3981/latency-svc-2zdpq updated: 1 ports\nI0416 04:21:52.167530 1 service.go:301] Service svc-latency-3981/latency-svc-w9f2h updated: 1 ports\nI0416 04:21:52.222472 1 service.go:301] Service svc-latency-3981/latency-svc-v4227 updated: 1 ports\nI0416 04:21:52.268655 1 service.go:301] Service svc-latency-3981/latency-svc-x7hwk updated: 1 ports\nI0416 04:21:52.330956 1 service.go:301] Service svc-latency-3981/latency-svc-hdl8p updated: 1 ports\nI0416 04:21:52.366298 1 service.go:301] 
Service svc-latency-3981/latency-svc-b5hzv updated: 1 ports
I0416 04:21:52.415636 1 service.go:301] Service svc-latency-3981/latency-svc-kr2z8 updated: 1 ports
... skipping repeated "Service svc-latency-3981/latency-svc-* updated: 1 ports" and "Adding new service port" lines ...
I0416 04:21:53.017654 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:53.082290 1 proxier.go:813] "SyncProxyRules complete" elapsed="65.178937ms"
... skipping further svc-latency-3981 service updates and port additions ...
I0416 04:21:54.028555 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:54.106237 1 proxier.go:813] "SyncProxyRules complete" elapsed="78.279446ms"
I0416 04:21:55.107120 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:55.212615 1 proxier.go:813] "SyncProxyRules complete" elapsed="105.619665ms"
I0416 04:21:56.538710 1 service.go:301] Service webhook-4171/e2e-test-webhook updated: 1 ports
I0416 04:21:56.538756 1 service.go:416] Adding new service port "webhook-4171/e2e-test-webhook" at 100.67.32.153:8443/TCP
I0416 04:21:56.538884 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:56.628201 1 proxier.go:813] "SyncProxyRules complete" elapsed="89.444381ms"
I0416 04:21:57.628858 1 proxier.go:846] "Syncing iptables rules"
I0416 04:21:57.734139 1 proxier.go:813] "SyncProxyRules complete" elapsed="105.412958ms"
elapsed=\"105.412958ms\"\nI0416 04:21:59.573334 1 service.go:301] Service svc-latency-3981/latency-svc-24kvw updated: 0 ports\nI0416 04:21:59.573378 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-24kvw\"\nI0416 04:21:59.573590 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:59.598390 1 service.go:301] Service svc-latency-3981/latency-svc-2clqv updated: 0 ports\nI0416 04:21:59.615846 1 service.go:301] Service svc-latency-3981/latency-svc-2jvgg updated: 0 ports\nI0416 04:21:59.662504 1 service.go:301] Service svc-latency-3981/latency-svc-2k6bk updated: 0 ports\nI0416 04:21:59.677162 1 service.go:301] Service svc-latency-3981/latency-svc-2lf5w updated: 0 ports\nI0416 04:21:59.696996 1 service.go:301] Service svc-latency-3981/latency-svc-2lp2x updated: 0 ports\nI0416 04:21:59.718454 1 service.go:301] Service svc-latency-3981/latency-svc-2w5ct updated: 0 ports\nI0416 04:21:59.734550 1 service.go:301] Service svc-latency-3981/latency-svc-2x2b2 updated: 0 ports\nI0416 04:21:59.748121 1 service.go:301] Service svc-latency-3981/latency-svc-2zdpq updated: 0 ports\nI0416 04:21:59.750253 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"176.862345ms\"\nI0416 04:21:59.750283 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2zdpq\"\nI0416 04:21:59.750297 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2clqv\"\nI0416 04:21:59.750307 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2jvgg\"\nI0416 04:21:59.750316 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2k6bk\"\nI0416 04:21:59.750326 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2lf5w\"\nI0416 04:21:59.750336 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2lp2x\"\nI0416 04:21:59.750349 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2w5ct\"\nI0416 04:21:59.750362 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-2x2b2\"\nI0416 04:21:59.750617 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:21:59.772419 1 service.go:301] Service svc-latency-3981/latency-svc-444nw updated: 0 ports\nI0416 04:21:59.790665 1 service.go:301] Service svc-latency-3981/latency-svc-44wwt updated: 0 ports\nI0416 04:21:59.797978 1 service.go:301] Service svc-latency-3981/latency-svc-46zh6 updated: 0 ports\nI0416 04:21:59.816783 1 service.go:301] Service svc-latency-3981/latency-svc-49w6x updated: 0 ports\nI0416 04:21:59.833350 1 service.go:301] Service svc-latency-3981/latency-svc-4bncr updated: 0 ports\nI0416 04:21:59.845259 1 service.go:301] Service svc-latency-3981/latency-svc-4dzlb updated: 0 ports\nI0416 04:21:59.860597 1 service.go:301] Service svc-latency-3981/latency-svc-4f65r updated: 0 ports\nI0416 04:21:59.872964 1 service.go:301] Service svc-latency-3981/latency-svc-4k86d updated: 0 ports\nI0416 04:21:59.887147 1 service.go:301] Service svc-latency-3981/latency-svc-4mdx5 updated: 0 ports\nI0416 04:21:59.899800 1 service.go:301] Service svc-latency-3981/latency-svc-4sxrn updated: 0 ports\nI0416 04:21:59.908200 1 service.go:301] Service svc-latency-3981/latency-svc-4wd85 updated: 0 ports\nI0416 04:21:59.915205 1 service.go:301] Service svc-latency-3981/latency-svc-56brr updated: 0 ports\nI0416 04:21:59.923298 1 service.go:301] Service svc-latency-3981/latency-svc-5d7hs updated: 0 ports\nI0416 04:21:59.937450 1 service.go:301] Service svc-latency-3981/latency-svc-5dll9 updated: 0 ports\nI0416 04:21:59.947897 1 service.go:301] Service 
svc-latency-3981/latency-svc-5jdjd updated: 0 ports\nI0416 04:21:59.957375 1 service.go:301] Service svc-latency-3981/latency-svc-5t6sm updated: 0 ports\nI0416 04:21:59.962853 1 service.go:301] Service svc-latency-3981/latency-svc-66nfb updated: 0 ports\nI0416 04:21:59.979206 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"228.903187ms\"\nI0416 04:21:59.987693 1 service.go:301] Service svc-latency-3981/latency-svc-6d8r4 updated: 0 ports\nI0416 04:21:59.997142 1 service.go:301] Service svc-latency-3981/latency-svc-6drnp updated: 0 ports\nI0416 04:22:00.017485 1 service.go:301] Service svc-latency-3981/latency-svc-6kkhg updated: 0 ports\nI0416 04:22:00.025833 1 service.go:301] Service svc-latency-3981/latency-svc-6kn8x updated: 0 ports\nI0416 04:22:00.034213 1 service.go:301] Service svc-latency-3981/latency-svc-6pnn8 updated: 0 ports\nI0416 04:22:00.039376 1 service.go:301] Service svc-latency-3981/latency-svc-6txzc updated: 0 ports\nI0416 04:22:00.045920 1 service.go:301] Service svc-latency-3981/latency-svc-7fk9l updated: 0 ports\nI0416 04:22:00.052673 1 service.go:301] Service svc-latency-3981/latency-svc-7gg2n updated: 0 ports\nI0416 04:22:00.060026 1 service.go:301] Service svc-latency-3981/latency-svc-7mgwx updated: 0 ports\nI0416 04:22:00.067997 1 service.go:301] Service svc-latency-3981/latency-svc-7vw8t updated: 0 ports\nI0416 04:22:00.086147 1 service.go:301] Service svc-latency-3981/latency-svc-7wnrp updated: 0 ports\nI0416 04:22:00.099369 1 service.go:301] Service svc-latency-3981/latency-svc-86r5n updated: 0 ports\nI0416 04:22:00.111108 1 service.go:301] Service svc-latency-3981/latency-svc-8f5zr updated: 0 ports\nI0416 04:22:00.128789 1 service.go:301] Service svc-latency-3981/latency-svc-8jtd7 updated: 0 ports\nI0416 04:22:00.157745 1 service.go:301] Service svc-latency-3981/latency-svc-8kwfl updated: 0 ports\nI0416 04:22:00.173894 1 service.go:301] Service svc-latency-3981/latency-svc-8nwp6 updated: 0 ports\nI0416 04:22:00.187380 1 service.go:301] Service svc-latency-3981/latency-svc-8v8st updated: 0 ports\nI0416 04:22:00.193168 1 service.go:301] Service svc-latency-3981/latency-svc-92phz updated: 0 ports\nI0416 04:22:00.209762 1 service.go:301] Service svc-latency-3981/latency-svc-9422l updated: 0 ports\nI0416 04:22:00.218767 1 service.go:301] Service svc-latency-3981/latency-svc-95lzs updated: 0 ports\nI0416 04:22:00.230640 1 service.go:301] Service svc-latency-3981/latency-svc-9fnrp updated: 0 ports\nI0416 04:22:00.238824 1 service.go:301] Service svc-latency-3981/latency-svc-9gtqh updated: 0 ports\nI0416 04:22:00.246066 1 service.go:301] Service svc-latency-3981/latency-svc-9jdjb updated: 0 ports\nI0416 04:22:00.252168 1 service.go:301] Service svc-latency-3981/latency-svc-9nx8j updated: 0 ports\nI0416 04:22:00.261329 1 service.go:301] Service svc-latency-3981/latency-svc-9rjdq updated: 0 ports\nI0416 04:22:00.269851 1 service.go:301] Service svc-latency-3981/latency-svc-9w255 updated: 0 ports\nI0416 04:22:00.292830 1 service.go:301] Service svc-latency-3981/latency-svc-b5hzv updated: 0 ports\nI0416 04:22:00.300225 1 service.go:301] Service svc-latency-3981/latency-svc-b5snz updated: 0 ports\nI0416 04:22:00.306829 1 service.go:301] Service svc-latency-3981/latency-svc-b9qn6 updated: 0 ports\nI0416 04:22:00.315514 1 service.go:301] Service svc-latency-3981/latency-svc-bk9mt updated: 0 ports\nI0416 04:22:00.323786 1 service.go:301] Service svc-latency-3981/latency-svc-bkp2m updated: 0 ports\nI0416 04:22:00.342772 1 service.go:301] Service 
svc-latency-3981/latency-svc-bxzdt updated: 0 ports\nI0416 04:22:00.354481 1 service.go:301] Service svc-latency-3981/latency-svc-c627r updated: 0 ports\nI0416 04:22:00.363980 1 service.go:301] Service svc-latency-3981/latency-svc-c8p2l updated: 0 ports\nI0416 04:22:00.371334 1 service.go:301] Service svc-latency-3981/latency-svc-ccwh2 updated: 0 ports\nI0416 04:22:00.381608 1 service.go:301] Service svc-latency-3981/latency-svc-cz7hv updated: 0 ports\nI0416 04:22:00.389532 1 service.go:301] Service svc-latency-3981/latency-svc-czjpm updated: 0 ports\nI0416 04:22:00.405519 1 service.go:301] Service svc-latency-3981/latency-svc-d2xrl updated: 0 ports\nI0416 04:22:00.416908 1 service.go:301] Service svc-latency-3981/latency-svc-d7rjl updated: 0 ports\nI0416 04:22:00.423675 1 service.go:301] Service svc-latency-3981/latency-svc-d7xgc updated: 0 ports\nI0416 04:22:00.432792 1 service.go:301] Service svc-latency-3981/latency-svc-d9q85 updated: 0 ports\nI0416 04:22:00.440547 1 service.go:301] Service svc-latency-3981/latency-svc-dcljc updated: 0 ports\nI0416 04:22:00.449891 1 service.go:301] Service svc-latency-3981/latency-svc-dks2n updated: 0 ports\nI0416 04:22:00.458007 1 service.go:301] Service svc-latency-3981/latency-svc-dr54h updated: 0 ports\nI0416 04:22:00.465602 1 service.go:301] Service svc-latency-3981/latency-svc-dtb4m updated: 0 ports\nI0416 04:22:00.478646 1 service.go:301] Service svc-latency-3981/latency-svc-f299t updated: 0 ports\nI0416 04:22:00.486371 1 service.go:301] Service svc-latency-3981/latency-svc-fbqps updated: 0 ports\nI0416 04:22:00.499701 1 service.go:301] Service svc-latency-3981/latency-svc-fcc8g updated: 0 ports\nI0416 04:22:00.520586 1 service.go:301] Service svc-latency-3981/latency-svc-fglqx updated: 0 ports\nI0416 04:22:00.529577 1 service.go:301] Service svc-latency-3981/latency-svc-fpts9 updated: 0 ports\nI0416 04:22:00.536875 1 service.go:301] Service svc-latency-3981/latency-svc-fw4ht updated: 0 ports\nI0416 04:22:00.549766 1 service.go:301] Service svc-latency-3981/latency-svc-fzc9p updated: 0 ports\nI0416 04:22:00.597850 1 service.go:301] Service svc-latency-3981/latency-svc-g248w updated: 0 ports\nI0416 04:22:00.597890 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8kwfl\"\nI0416 04:22:00.597908 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9rjdq\"\nI0416 04:22:00.597916 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-b9qn6\"\nI0416 04:22:00.597924 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-bxzdt\"\nI0416 04:22:00.597932 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-cz7hv\"\nI0416 04:22:00.597941 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fcc8g\"\nI0416 04:22:00.597950 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-444nw\"\nI0416 04:22:00.597956 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-49w6x\"\nI0416 04:22:00.597966 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4bncr\"\nI0416 04:22:00.597977 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fglqx\"\nI0416 04:22:00.597984 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fw4ht\"\nI0416 04:22:00.597991 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4f65r\"\nI0416 04:22:00.597999 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7fk9l\"\nI0416 04:22:00.598006 1 service.go:441] Removing 
service port \"svc-latency-3981/latency-svc-ccwh2\"\nI0416 04:22:00.598013 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-d7xgc\"\nI0416 04:22:00.598020 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-dcljc\"\nI0416 04:22:00.598028 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-f299t\"\nI0416 04:22:00.598035 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g248w\"\nI0416 04:22:00.598041 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-56brr\"\nI0416 04:22:00.598049 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-86r5n\"\nI0416 04:22:00.598057 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-95lzs\"\nI0416 04:22:00.598066 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9fnrp\"\nI0416 04:22:00.598074 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fzc9p\"\nI0416 04:22:00.598083 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-5t6sm\"\nI0416 04:22:00.598091 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6drnp\"\nI0416 04:22:00.598098 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6kn8x\"\nI0416 04:22:00.598106 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-c627r\"\nI0416 04:22:00.598113 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-dr54h\"\nI0416 04:22:00.598121 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-66nfb\"\nI0416 04:22:00.598129 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7mgwx\"\nI0416 04:22:00.598141 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-46zh6\"\nI0416 04:22:00.598151 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4sxrn\"\nI0416 04:22:00.598158 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-5dll9\"\nI0416 04:22:00.598166 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9w255\"\nI0416 04:22:00.598174 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-dks2n\"\nI0416 04:22:00.598181 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-czjpm\"\nI0416 04:22:00.598189 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4k86d\"\nI0416 04:22:00.598196 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6pnn8\"\nI0416 04:22:00.598205 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7vw8t\"\nI0416 04:22:00.598213 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9gtqh\"\nI0416 04:22:00.598221 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9jdjb\"\nI0416 04:22:00.598228 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-bkp2m\"\nI0416 04:22:00.598236 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7gg2n\"\nI0416 04:22:00.598243 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-7wnrp\"\nI0416 04:22:00.598253 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8jtd7\"\nI0416 04:22:00.598261 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9422l\"\nI0416 04:22:00.598269 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-44wwt\"\nI0416 04:22:00.598278 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4dzlb\"\nI0416 04:22:00.598285 1 
service.go:441] Removing service port \"svc-latency-3981/latency-svc-b5snz\"\nI0416 04:22:00.598294 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-bk9mt\"\nI0416 04:22:00.598304 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4mdx5\"\nI0416 04:22:00.598313 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-92phz\"\nI0416 04:22:00.598322 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-4wd85\"\nI0416 04:22:00.598330 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8f5zr\"\nI0416 04:22:00.598337 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-9nx8j\"\nI0416 04:22:00.598345 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-d7rjl\"\nI0416 04:22:00.598352 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-dtb4m\"\nI0416 04:22:00.598360 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fbqps\"\nI0416 04:22:00.598368 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6txzc\"\nI0416 04:22:00.598376 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-b5hzv\"\nI0416 04:22:00.598386 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-d2xrl\"\nI0416 04:22:00.598394 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-fpts9\"\nI0416 04:22:00.598402 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8nwp6\"\nI0416 04:22:00.598411 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-8v8st\"\nI0416 04:22:00.598419 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-5d7hs\"\nI0416 04:22:00.598426 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-5jdjd\"\nI0416 04:22:00.598433 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-c8p2l\"\nI0416 04:22:00.598441 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6d8r4\"\nI0416 04:22:00.598449 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-6kkhg\"\nI0416 04:22:00.598457 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-d9q85\"\nI0416 04:22:00.598716 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:00.640576 1 service.go:301] Service svc-latency-3981/latency-svc-g2th9 updated: 0 ports\nI0416 04:22:00.674879 1 service.go:301] Service svc-latency-3981/latency-svc-g6dmz updated: 0 ports\nI0416 04:22:00.696061 1 service.go:301] Service svc-latency-3981/latency-svc-g6t7j updated: 0 ports\nI0416 04:22:00.706203 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"108.300046ms\"\nI0416 04:22:00.709615 1 service.go:301] Service svc-latency-3981/latency-svc-g85nc updated: 0 ports\nI0416 04:22:00.730412 1 service.go:301] Service svc-latency-3981/latency-svc-g947j updated: 0 ports\nI0416 04:22:00.748419 1 service.go:301] Service svc-latency-3981/latency-svc-gdvbv updated: 0 ports\nI0416 04:22:00.772913 1 service.go:301] Service svc-latency-3981/latency-svc-ggwzr updated: 0 ports\nI0416 04:22:00.788072 1 service.go:301] Service svc-latency-3981/latency-svc-gmxj9 updated: 0 ports\nI0416 04:22:00.798743 1 service.go:301] Service svc-latency-3981/latency-svc-gz4hw updated: 0 ports\nI0416 04:22:00.815097 1 service.go:301] Service svc-latency-3981/latency-svc-h5rwt updated: 0 ports\nI0416 04:22:00.821469 1 service.go:301] Service svc-latency-3981/latency-svc-h9jzs updated: 0 ports\nI0416 04:22:00.841362 1 service.go:301] Service 
svc-latency-3981/latency-svc-hdl8p updated: 0 ports\nI0416 04:22:00.849513 1 service.go:301] Service svc-latency-3981/latency-svc-hkppp updated: 0 ports\nI0416 04:22:00.856356 1 service.go:301] Service svc-latency-3981/latency-svc-hn2xn updated: 0 ports\nI0416 04:22:00.864151 1 service.go:301] Service svc-latency-3981/latency-svc-hsjnr updated: 0 ports\nI0416 04:22:00.872159 1 service.go:301] Service svc-latency-3981/latency-svc-j87cw updated: 0 ports\nI0416 04:22:00.879376 1 service.go:301] Service svc-latency-3981/latency-svc-j8895 updated: 0 ports\nI0416 04:22:00.890169 1 service.go:301] Service svc-latency-3981/latency-svc-j88m6 updated: 0 ports\nI0416 04:22:00.900870 1 service.go:301] Service svc-latency-3981/latency-svc-jnmlb updated: 0 ports\nI0416 04:22:00.913356 1 service.go:301] Service svc-latency-3981/latency-svc-jqc6f updated: 0 ports\nI0416 04:22:00.919802 1 service.go:301] Service svc-latency-3981/latency-svc-jrn8c updated: 0 ports\nI0416 04:22:00.931335 1 service.go:301] Service svc-latency-3981/latency-svc-k2chp updated: 0 ports\nI0416 04:22:00.945091 1 service.go:301] Service svc-latency-3981/latency-svc-k6gmx updated: 0 ports\nI0416 04:22:00.953825 1 service.go:301] Service svc-latency-3981/latency-svc-k74n8 updated: 0 ports\nI0416 04:22:00.960888 1 service.go:301] Service svc-latency-3981/latency-svc-kfp8t updated: 0 ports\nI0416 04:22:00.967684 1 service.go:301] Service svc-latency-3981/latency-svc-kjg25 updated: 0 ports\nI0416 04:22:00.975406 1 service.go:301] Service svc-latency-3981/latency-svc-kk9qz updated: 0 ports\nI0416 04:22:00.983714 1 service.go:301] Service svc-latency-3981/latency-svc-kld75 updated: 0 ports\nI0416 04:22:00.992458 1 service.go:301] Service svc-latency-3981/latency-svc-knbng updated: 0 ports\nI0416 04:22:01.000173 1 service.go:301] Service svc-latency-3981/latency-svc-kr2r7 updated: 0 ports\nI0416 04:22:01.007764 1 service.go:301] Service svc-latency-3981/latency-svc-kr2z8 updated: 0 ports\nI0416 04:22:01.016669 1 service.go:301] Service svc-latency-3981/latency-svc-kw5hl updated: 0 ports\nI0416 04:22:01.025057 1 service.go:301] Service svc-latency-3981/latency-svc-kwzpv updated: 0 ports\nI0416 04:22:01.031698 1 service.go:301] Service svc-latency-3981/latency-svc-l5k7g updated: 0 ports\nI0416 04:22:01.038988 1 service.go:301] Service svc-latency-3981/latency-svc-l6j5h updated: 0 ports\nI0416 04:22:01.059476 1 service.go:301] Service svc-latency-3981/latency-svc-l8znx updated: 0 ports\nI0416 04:22:01.067159 1 service.go:301] Service svc-latency-3981/latency-svc-lhtjh updated: 0 ports\nI0416 04:22:01.079848 1 service.go:301] Service svc-latency-3981/latency-svc-lmc2v updated: 0 ports\nI0416 04:22:01.085905 1 service.go:301] Service svc-latency-3981/latency-svc-lrd6g updated: 0 ports\nI0416 04:22:01.094963 1 service.go:301] Service svc-latency-3981/latency-svc-m9cps updated: 0 ports\nI0416 04:22:01.104591 1 service.go:301] Service svc-latency-3981/latency-svc-mj5h5 updated: 0 ports\nI0416 04:22:01.112509 1 service.go:301] Service svc-latency-3981/latency-svc-mk26h updated: 0 ports\nI0416 04:22:01.122781 1 service.go:301] Service svc-latency-3981/latency-svc-msttv updated: 0 ports\nI0416 04:22:01.138242 1 service.go:301] Service svc-latency-3981/latency-svc-n2tgb updated: 0 ports\nI0416 04:22:01.144668 1 service.go:301] Service svc-latency-3981/latency-svc-n8knn updated: 0 ports\nI0416 04:22:01.153655 1 service.go:301] Service svc-latency-3981/latency-svc-nmb8h updated: 0 ports\nI0416 04:22:01.165853 1 service.go:301] Service 
svc-latency-3981/latency-svc-p7zbc updated: 0 ports\nI0416 04:22:01.172796 1 service.go:301] Service svc-latency-3981/latency-svc-p9h96 updated: 0 ports\nI0416 04:22:01.180522 1 service.go:301] Service svc-latency-3981/latency-svc-pgcfp updated: 0 ports\nI0416 04:22:01.188660 1 service.go:301] Service svc-latency-3981/latency-svc-pjccn updated: 0 ports\nI0416 04:22:01.196225 1 service.go:301] Service svc-latency-3981/latency-svc-pkvbd updated: 0 ports\nI0416 04:22:01.206274 1 service.go:301] Service svc-latency-3981/latency-svc-pq8ml updated: 0 ports\nI0416 04:22:01.213947 1 service.go:301] Service svc-latency-3981/latency-svc-q4257 updated: 0 ports\nI0416 04:22:01.220805 1 service.go:301] Service svc-latency-3981/latency-svc-q4kv2 updated: 0 ports\nI0416 04:22:01.228665 1 service.go:301] Service svc-latency-3981/latency-svc-q6x9d updated: 0 ports\nI0416 04:22:01.238968 1 service.go:301] Service svc-latency-3981/latency-svc-q7xh9 updated: 0 ports\nI0416 04:22:01.245650 1 service.go:301] Service svc-latency-3981/latency-svc-qgpmd updated: 0 ports\nI0416 04:22:01.255295 1 service.go:301] Service svc-latency-3981/latency-svc-qjn2p updated: 0 ports\nI0416 04:22:01.263290 1 service.go:301] Service svc-latency-3981/latency-svc-qkpxm updated: 0 ports\nI0416 04:22:01.294140 1 service.go:301] Service svc-latency-3981/latency-svc-qmd8f updated: 0 ports\nI0416 04:22:01.312443 1 service.go:301] Service svc-latency-3981/latency-svc-qp4nv updated: 0 ports\nI0416 04:22:01.325442 1 service.go:301] Service svc-latency-3981/latency-svc-qrjc4 updated: 0 ports\nI0416 04:22:01.333204 1 service.go:301] Service svc-latency-3981/latency-svc-qrm7g updated: 0 ports\nI0416 04:22:01.339924 1 service.go:301] Service svc-latency-3981/latency-svc-qx9r9 updated: 0 ports\nI0416 04:22:01.347282 1 service.go:301] Service svc-latency-3981/latency-svc-qxhct updated: 0 ports\nI0416 04:22:01.356853 1 service.go:301] Service svc-latency-3981/latency-svc-r7jf4 updated: 0 ports\nI0416 04:22:01.365616 1 service.go:301] Service svc-latency-3981/latency-svc-r7mp2 updated: 0 ports\nI0416 04:22:01.385955 1 service.go:301] Service svc-latency-3981/latency-svc-r8wlh updated: 0 ports\nI0416 04:22:01.396255 1 service.go:301] Service svc-latency-3981/latency-svc-rdtjw updated: 0 ports\nI0416 04:22:01.403780 1 service.go:301] Service svc-latency-3981/latency-svc-rkxrw updated: 0 ports\nI0416 04:22:01.410666 1 service.go:301] Service svc-latency-3981/latency-svc-rqlv9 updated: 0 ports\nI0416 04:22:01.422532 1 service.go:301] Service svc-latency-3981/latency-svc-sgq7s updated: 0 ports\nI0416 04:22:01.438640 1 service.go:301] Service svc-latency-3981/latency-svc-sjbwr updated: 0 ports\nI0416 04:22:01.453214 1 service.go:301] Service svc-latency-3981/latency-svc-sn88h updated: 0 ports\nI0416 04:22:01.477535 1 service.go:301] Service svc-latency-3981/latency-svc-snpps updated: 0 ports\nI0416 04:22:01.537305 1 service.go:301] Service svc-latency-3981/latency-svc-sspvt updated: 0 ports\nI0416 04:22:01.557565 1 service.go:301] Service svc-latency-3981/latency-svc-sx4wq updated: 0 ports\nI0416 04:22:01.573914 1 service.go:301] Service svc-latency-3981/latency-svc-szgj5 updated: 0 ports\nI0416 04:22:01.574024 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sspvt\"\nI0416 04:22:01.574037 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kw5hl\"\nI0416 04:22:01.574043 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-nmb8h\"\nI0416 04:22:01.574049 1 service.go:441] Removing service port 
\"svc-latency-3981/latency-svc-pgcfp\"\nI0416 04:22:01.574055 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-q4kv2\"\nI0416 04:22:01.574060 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-q6x9d\"\nI0416 04:22:01.574078 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qrm7g\"\nI0416 04:22:01.574086 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g947j\"\nI0416 04:22:01.574093 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kfp8t\"\nI0416 04:22:01.574098 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-knbng\"\nI0416 04:22:01.574103 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sjbwr\"\nI0416 04:22:01.574108 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sn88h\"\nI0416 04:22:01.574113 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-hdl8p\"\nI0416 04:22:01.574118 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-j8895\"\nI0416 04:22:01.574124 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-gdvbv\"\nI0416 04:22:01.574129 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kr2z8\"\nI0416 04:22:01.574134 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-lmc2v\"\nI0416 04:22:01.574139 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-p7zbc\"\nI0416 04:22:01.574144 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sx4wq\"\nI0416 04:22:01.574152 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-szgj5\"\nI0416 04:22:01.574160 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g85nc\"\nI0416 04:22:01.574167 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-gmxj9\"\nI0416 04:22:01.574175 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-mj5h5\"\nI0416 04:22:01.574180 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-n2tgb\"\nI0416 04:22:01.574185 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-pkvbd\"\nI0416 04:22:01.574190 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-q7xh9\"\nI0416 04:22:01.574194 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g2th9\"\nI0416 04:22:01.574199 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kjg25\"\nI0416 04:22:01.574204 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-rkxrw\"\nI0416 04:22:01.574208 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g6t7j\"\nI0416 04:22:01.574213 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-j88m6\"\nI0416 04:22:01.574218 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-l5k7g\"\nI0416 04:22:01.574223 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-lrd6g\"\nI0416 04:22:01.574227 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-r7jf4\"\nI0416 04:22:01.574234 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-hkppp\"\nI0416 04:22:01.574242 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-hsjnr\"\nI0416 04:22:01.574249 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-jqc6f\"\nI0416 04:22:01.574255 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-m9cps\"\nI0416 04:22:01.574262 1 service.go:441] 
Removing service port \"svc-latency-3981/latency-svc-snpps\"\nI0416 04:22:01.574275 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-g6dmz\"\nI0416 04:22:01.574280 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-gz4hw\"\nI0416 04:22:01.574285 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-hn2xn\"\nI0416 04:22:01.574290 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kr2r7\"\nI0416 04:22:01.574294 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-l6j5h\"\nI0416 04:22:01.574299 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-ggwzr\"\nI0416 04:22:01.574304 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-pjccn\"\nI0416 04:22:01.574309 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qkpxm\"\nI0416 04:22:01.574315 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qx9r9\"\nI0416 04:22:01.574323 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-rdtjw\"\nI0416 04:22:01.574330 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-h5rwt\"\nI0416 04:22:01.574335 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kwzpv\"\nI0416 04:22:01.574340 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-lhtjh\"\nI0416 04:22:01.574344 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-q4257\"\nI0416 04:22:01.574349 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-r8wlh\"\nI0416 04:22:01.574354 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-h9jzs\"\nI0416 04:22:01.574358 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-k74n8\"\nI0416 04:22:01.574363 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qgpmd\"\nI0416 04:22:01.574367 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qmd8f\"\nI0416 04:22:01.574374 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qrjc4\"\nI0416 04:22:01.574379 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-rqlv9\"\nI0416 04:22:01.574384 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-r7mp2\"\nI0416 04:22:01.574390 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kk9qz\"\nI0416 04:22:01.574397 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-kld75\"\nI0416 04:22:01.574404 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-l8znx\"\nI0416 04:22:01.574411 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-msttv\"\nI0416 04:22:01.574416 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qjn2p\"\nI0416 04:22:01.574420 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qxhct\"\nI0416 04:22:01.574425 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-k2chp\"\nI0416 04:22:01.574430 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-k6gmx\"\nI0416 04:22:01.574434 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-mk26h\"\nI0416 04:22:01.574439 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-pq8ml\"\nI0416 04:22:01.574444 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-qp4nv\"\nI0416 04:22:01.574449 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-j87cw\"\nI0416 04:22:01.574537 1 
service.go:441] Removing service port \"svc-latency-3981/latency-svc-jnmlb\"\nI0416 04:22:01.574546 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-jrn8c\"\nI0416 04:22:01.574551 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-n8knn\"\nI0416 04:22:01.574556 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-p9h96\"\nI0416 04:22:01.574561 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-sgq7s\"\nI0416 04:22:01.574718 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:01.588627 1 service.go:301] Service svc-latency-3981/latency-svc-t7sb8 updated: 0 ports\nI0416 04:22:01.609597 1 service.go:301] Service svc-latency-3981/latency-svc-t9ht8 updated: 0 ports\nI0416 04:22:01.619941 1 service.go:301] Service svc-latency-3981/latency-svc-tfgb8 updated: 0 ports\nI0416 04:22:01.630901 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"56.864607ms\"\nI0416 04:22:01.652073 1 service.go:301] Service svc-latency-3981/latency-svc-tmz8w updated: 0 ports\nI0416 04:22:01.652168 1 service.go:301] Service svc-latency-3981/latency-svc-tnsf2 updated: 0 ports\nI0416 04:22:01.676549 1 service.go:301] Service svc-latency-3981/latency-svc-v28nm updated: 0 ports\nI0416 04:22:01.689039 1 service.go:301] Service svc-latency-3981/latency-svc-v4227 updated: 0 ports\nI0416 04:22:01.708303 1 service.go:301] Service svc-latency-3981/latency-svc-v9ngm updated: 0 ports\nI0416 04:22:01.723947 1 service.go:301] Service svc-latency-3981/latency-svc-vcpfc updated: 0 ports\nI0416 04:22:01.733826 1 service.go:301] Service svc-latency-3981/latency-svc-vhd5k updated: 0 ports\nI0416 04:22:01.743250 1 service.go:301] Service svc-latency-3981/latency-svc-vmjrn updated: 0 ports\nI0416 04:22:01.752403 1 service.go:301] Service svc-latency-3981/latency-svc-vv8q4 updated: 0 ports\nI0416 04:22:01.761479 1 service.go:301] Service svc-latency-3981/latency-svc-w8pmh updated: 0 ports\nI0416 04:22:01.770721 1 service.go:301] Service svc-latency-3981/latency-svc-w9f2h updated: 0 ports\nI0416 04:22:01.784071 1 service.go:301] Service svc-latency-3981/latency-svc-wb26g updated: 0 ports\nI0416 04:22:01.793510 1 service.go:301] Service svc-latency-3981/latency-svc-wc4b7 updated: 0 ports\nI0416 04:22:01.805158 1 service.go:301] Service svc-latency-3981/latency-svc-wd4zt updated: 0 ports\nI0416 04:22:01.825850 1 service.go:301] Service svc-latency-3981/latency-svc-wp8xr updated: 0 ports\nI0416 04:22:01.836324 1 service.go:301] Service svc-latency-3981/latency-svc-wv8nx updated: 0 ports\nI0416 04:22:01.849393 1 service.go:301] Service svc-latency-3981/latency-svc-wvw6k updated: 0 ports\nI0416 04:22:01.865921 1 service.go:301] Service svc-latency-3981/latency-svc-wx822 updated: 0 ports\nI0416 04:22:01.878668 1 service.go:301] Service svc-latency-3981/latency-svc-wzbj8 updated: 0 ports\nI0416 04:22:01.895946 1 service.go:301] Service svc-latency-3981/latency-svc-x2k7p updated: 0 ports\nI0416 04:22:01.902302 1 service.go:301] Service svc-latency-3981/latency-svc-x7hwk updated: 0 ports\nI0416 04:22:01.911748 1 service.go:301] Service svc-latency-3981/latency-svc-x88zw updated: 0 ports\nI0416 04:22:01.925053 1 service.go:301] Service svc-latency-3981/latency-svc-xct4m updated: 0 ports\nI0416 04:22:01.943423 1 service.go:301] Service svc-latency-3981/latency-svc-xf99p updated: 0 ports\nI0416 04:22:01.957604 1 service.go:301] Service svc-latency-3981/latency-svc-xfghb updated: 0 ports\nI0416 04:22:01.968577 1 service.go:301] Service 
svc-latency-3981/latency-svc-xh99l updated: 0 ports\nI0416 04:22:01.981873 1 service.go:301] Service svc-latency-3981/latency-svc-xjbfs updated: 0 ports\nI0416 04:22:01.992562 1 service.go:301] Service svc-latency-3981/latency-svc-xjlnp updated: 0 ports\nI0416 04:22:02.003720 1 service.go:301] Service svc-latency-3981/latency-svc-xkcrs updated: 0 ports\nI0416 04:22:02.010722 1 service.go:301] Service svc-latency-3981/latency-svc-xkfsd updated: 0 ports\nI0416 04:22:02.017922 1 service.go:301] Service svc-latency-3981/latency-svc-xmxvf updated: 0 ports\nI0416 04:22:02.037635 1 service.go:301] Service svc-latency-3981/latency-svc-xrnwp updated: 0 ports\nI0416 04:22:02.046583 1 service.go:301] Service svc-latency-3981/latency-svc-z48q8 updated: 0 ports\nI0416 04:22:02.053300 1 service.go:301] Service svc-latency-3981/latency-svc-z5ghz updated: 0 ports\nI0416 04:22:02.060279 1 service.go:301] Service svc-latency-3981/latency-svc-z7k5t updated: 0 ports\nI0416 04:22:02.068749 1 service.go:301] Service svc-latency-3981/latency-svc-zbd7h updated: 0 ports\nI0416 04:22:02.075679 1 service.go:301] Service svc-latency-3981/latency-svc-zhk72 updated: 0 ports\nI0416 04:22:02.083417 1 service.go:301] Service svc-latency-3981/latency-svc-zpx4h updated: 0 ports\nI0416 04:22:02.091008 1 service.go:301] Service svc-latency-3981/latency-svc-zvshw updated: 0 ports\nI0416 04:22:02.099281 1 service.go:301] Service svc-latency-3981/latency-svc-zwsbh updated: 0 ports\nI0416 04:22:02.106017 1 service.go:301] Service svc-latency-3981/latency-svc-zwvpj updated: 0 ports\nI0416 04:22:02.574386 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wv8nx\"\nI0416 04:22:02.574510 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-x2k7p\"\nI0416 04:22:02.574539 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-x88zw\"\nI0416 04:22:02.574564 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xfghb\"\nI0416 04:22:02.574584 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-z7k5t\"\nI0416 04:22:02.574604 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wzbj8\"\nI0416 04:22:02.574625 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xkfsd\"\nI0416 04:22:02.574651 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-t9ht8\"\nI0416 04:22:02.574679 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-v28nm\"\nI0416 04:22:02.574701 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-v9ngm\"\nI0416 04:22:02.574724 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xjbfs\"\nI0416 04:22:02.574745 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xmxvf\"\nI0416 04:22:02.574767 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-z48q8\"\nI0416 04:22:02.574788 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-tmz8w\"\nI0416 04:22:02.574816 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-v4227\"\nI0416 04:22:02.574840 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-vhd5k\"\nI0416 04:22:02.574859 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-vmjrn\"\nI0416 04:22:02.574881 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-w9f2h\"\nI0416 04:22:02.574903 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zpx4h\"\nI0416 04:22:02.574933 1 service.go:441] Removing 
service port \"svc-latency-3981/latency-svc-zwsbh\"\nI0416 04:22:02.574955 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zwvpj\"\nI0416 04:22:02.574976 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-tfgb8\"\nI0416 04:22:02.574997 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wc4b7\"\nI0416 04:22:02.575023 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wd4zt\"\nI0416 04:22:02.575045 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-x7hwk\"\nI0416 04:22:02.575066 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zbd7h\"\nI0416 04:22:02.575088 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xjlnp\"\nI0416 04:22:02.575107 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xkcrs\"\nI0416 04:22:02.575115 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-vcpfc\"\nI0416 04:22:02.575124 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wb26g\"\nI0416 04:22:02.575133 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wp8xr\"\nI0416 04:22:02.575141 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wvw6k\"\nI0416 04:22:02.575152 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xct4m\"\nI0416 04:22:02.575164 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zhk72\"\nI0416 04:22:02.575173 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-t7sb8\"\nI0416 04:22:02.575183 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-tnsf2\"\nI0416 04:22:02.575190 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-w8pmh\"\nI0416 04:22:02.575198 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xf99p\"\nI0416 04:22:02.575206 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xh99l\"\nI0416 04:22:02.575216 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-vv8q4\"\nI0416 04:22:02.575226 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-wx822\"\nI0416 04:22:02.575235 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-xrnwp\"\nI0416 04:22:02.575244 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-z5ghz\"\nI0416 04:22:02.575253 1 service.go:441] Removing service port \"svc-latency-3981/latency-svc-zvshw\"\nI0416 04:22:02.576290 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:02.668574 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"94.212606ms\"\nI0416 04:22:03.669618 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:03.727032 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"57.693113ms\"\nI0416 04:22:05.901615 1 service.go:301] Service services-9070/affinity-clusterip-timeout updated: 1 ports\nI0416 04:22:05.901648 1 service.go:416] Adding new service port \"services-9070/affinity-clusterip-timeout\" at 100.70.111.217:80/TCP\nI0416 04:22:05.901706 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:05.938319 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.646781ms\"\nI0416 04:22:05.938563 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:05.975212 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.706494ms\"\nI0416 04:22:07.682625 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:22:07.716314 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.736657ms\"\nI0416 
I0416 04:22:08.167589 1 proxier.go:846] "Syncing iptables rules"
I0416 04:22:08.265867 1 proxier.go:813] "SyncProxyRules complete" elapsed="98.339588ms"
... skipping proxier "Syncing iptables rules" / "SyncProxyRules complete" lines between the service events below ...
I0416 04:22:10.543341 1 service.go:301] Service services-6763/affinity-clusterip updated: 1 ports
I0416 04:22:10.543388 1 service.go:416] Adding new service port "services-6763/affinity-clusterip" at 100.67.66.248:80/TCP
I0416 04:22:15.276406 1 service.go:301] Service webhook-4171/e2e-test-webhook updated: 0 ports
I0416 04:22:15.276528 1 service.go:441] Removing service port "webhook-4171/e2e-test-webhook"
I0416 04:22:28.793991 1 service.go:301] Service services-8414/affinity-clusterip-transition updated: 1 ports
I0416 04:22:28.794046 1 service.go:416] Adding new service port "services-8414/affinity-clusterip-transition" at 100.69.172.86:80/TCP
I0416 04:22:40.737331 1 service.go:301] Service services-6763/affinity-clusterip updated: 0 ports
I0416 04:22:40.737368 1 service.go:441] Removing service port "services-6763/affinity-clusterip"
I0416 04:22:43.567802 1 service.go:301] Service services-8414/affinity-clusterip-transition updated: 1 ports
I0416 04:22:43.567839 1 service.go:418] Updating existing service port "services-8414/affinity-clusterip-transition" at 100.69.172.86:80/TCP
I0416 04:22:53.254426 1 service.go:301] Service webhook-4199/e2e-test-webhook updated: 1 ports
I0416 04:22:53.254533 1 service.go:416] Adding new service port "webhook-4199/e2e-test-webhook" at 100.71.83.135:8443/TCP
I0416 04:22:59.421503 1 service.go:301] Service webhook-4199/e2e-test-webhook updated: 0 ports
I0416 04:22:59.421561 1 service.go:441] Removing service port "webhook-4199/e2e-test-webhook"
I0416 04:23:00.935031 1 service.go:301] Service services-359/nodeport-collision-1 updated: 1 ports
I0416 04:23:00.935068 1 service.go:416] Adding new service port "services-359/nodeport-collision-1" at 100.67.105.134:80/TCP
I0416 04:23:01.034346 1 proxier.go:1355] "Opened local port" port="\"nodePort for services-359/nodeport-collision-1\" (:32446/tcp4)"
I0416 04:23:01.420337 1 service.go:301] Service services-359/nodeport-collision-1 updated: 0 ports
I0416 04:23:01.431880 1 service.go:441] Removing service port "services-359/nodeport-collision-1"
I0416 04:23:01.682148 1 service.go:301] Service services-359/nodeport-collision-2 updated: 1 ports
I0416 04:23:44.917087 1 service.go:301] Service services-9070/affinity-clusterip-timeout updated: 0 ports
I0416 04:23:45.928908 1 service.go:441] Removing service port "services-9070/affinity-clusterip-timeout"
I0416 04:24:02.847485 1 service.go:301] Service services-5577/nodeport-service updated: 1 ports
I0416 04:24:02.847532 1 service.go:416] Adding new service port "services-5577/nodeport-service" at 100.67.230.241:80/TCP
I0416 04:24:03.028462 1 proxier.go:1355] "Opened local port" port="\"nodePort for services-5577/nodeport-service\" (:32701/tcp4)"
I0416 04:24:03.088407 1 service.go:301] Service services-5577/externalsvc updated: 1 ports
I0416 04:24:04.118532 1 service.go:416] Adding new service port "services-5577/externalsvc" at 100.69.208.142:80/TCP
I0416 04:24:21.864896 1 service.go:301] Service crd-webhook-3479/e2e-test-crd-conversion-webhook updated: 1 ports
I0416 04:24:21.864937 1 service.go:416] Adding new service port "crd-webhook-3479/e2e-test-crd-conversion-webhook" at 100.66.12.242:9443/TCP
I0416 04:24:25.294333 1 service.go:301] Service services-5577/nodeport-service updated: 0 ports
I0416 04:24:25.294372 1 service.go:441] Removing service port "services-5577/nodeport-service"
I0416 04:24:29.302316 1 service.go:301] Service crd-webhook-3479/e2e-test-crd-conversion-webhook updated: 0 ports
I0416 04:24:29.302347 1 service.go:441] Removing service port "crd-webhook-3479/e2e-test-crd-conversion-webhook"
I0416 04:24:40.982805 1 service.go:301] Service services-5577/externalsvc updated: 0 ports
I0416 04:24:40.982837 1 service.go:441] Removing service port "services-5577/externalsvc"
I0416 04:25:00.685278 1 service.go:301] Service webhook-8589/e2e-test-webhook updated: 1 ports
I0416 04:25:00.685329 1 service.go:416] Adding new service port "webhook-8589/e2e-test-webhook" at 100.71.236.143:8443/TCP
I0416 04:25:02.970089 1 service.go:301] Service services-2658/affinity-nodeport-transition updated: 1 ports
I0416 04:25:02.970135 1 service.go:416] Adding new service port "services-2658/affinity-nodeport-transition" at 100.66.195.203:80/TCP
I0416 04:25:02.997896 1 proxier.go:1355] "Opened local port" port="\"nodePort for services-2658/affinity-nodeport-transition\" (:32112/tcp4)"
I0416 04:25:03.573133 1 service.go:301] Service webhook-8589/e2e-test-webhook updated: 0 ports
I0416 04:25:04.043282 1 service.go:441] Removing service port "webhook-8589/e2e-test-webhook"
I0416 04:25:05.187817 1 service.go:301] Service webhook-8648/e2e-test-webhook updated: 1 ports
I0416 04:25:05.354316 1 service.go:301] Service services-8414/affinity-clusterip-transition updated: 1 ports
I0416 04:25:06.153354 1 service.go:416] Adding new service port "webhook-8648/e2e-test-webhook" at 100.70.43.41:8443/TCP
I0416 04:25:06.153382 1 service.go:418] Updating existing service port "services-8414/affinity-clusterip-transition" at 100.69.172.86:80/TCP
I0416 04:25:12.335624 1 service.go:301] Service webhook-8648/e2e-test-webhook updated: 0 ports
I0416 04:25:12.335664 1 service.go:441] Removing service port "webhook-8648/e2e-test-webhook"
I0416 04:25:13.358873 1 service.go:301] Service webhook-82/e2e-test-webhook updated: 1 ports
I0416 04:25:13.358953 1 service.go:416] Adding new service port "webhook-82/e2e-test-webhook" at 100.70.200.135:8443/TCP
I0416 04:25:18.561547 1 service.go:301] Service webhook-82/e2e-test-webhook updated: 0 ports
I0416 04:25:18.561578 1 service.go:441] Removing service port "webhook-82/e2e-test-webhook"
I0416 04:25:23.909944 1 service.go:301] Service services-2658/affinity-nodeport-transition updated: 1 ports
I0416 04:25:23.909990 1 service.go:418] Updating existing service port "services-2658/affinity-nodeport-transition" at 100.66.195.203:80/TCP
I0416 04:25:25.438558 1 service.go:301] Service webhook-6678/e2e-test-webhook updated: 1 ports
I0416 04:25:25.438595 1 service.go:416] Adding new service port "webhook-6678/e2e-test-webhook" at 100.69.157.77:8443/TCP
I0416 04:25:26.229181 1 service.go:301] Service dns-6992/test-service-2 updated: 1 ports
I0416 04:25:26.517862 1 service.go:416] Adding new service port "dns-6992/test-service-2:http" at 100.71.247.254:80/TCP
I0416 04:25:26.872624 1 service.go:301] Service services-2658/affinity-nodeport-transition updated: 1 ports
I0416 04:25:27.674213 1 service.go:418] Updating existing service port "services-2658/affinity-nodeport-transition" at 100.66.195.203:80/TCP
I0416 04:25:31.770771 1 service.go:301] Service webhook-6678/e2e-test-webhook updated: 0 ports
I0416 04:25:32.287196 1 service.go:301] Service services-2658/affinity-nodeport-transition updated: 0 ports
I0416 04:25:32.789856 1 service.go:441] Removing service port "webhook-6678/e2e-test-webhook"
I0416 04:25:32.789971 1 service.go:441] Removing service port "services-2658/affinity-nodeport-transition"
I0416 04:25:35.110742 1 service.go:301] Service webhook-8817/e2e-test-webhook updated: 1 ports
I0416 04:25:35.110805 1 service.go:416] Adding new service port "webhook-8817/e2e-test-webhook" at 100.67.61.52:8443/TCP
I0416 04:25:40.256648 1 service.go:301] Service webhook-8817/e2e-test-webhook updated: 0 ports
I0416 04:25:40.256684 1 service.go:441] Removing service port "webhook-8817/e2e-test-webhook"
I0416 04:26:08.101292 1 service.go:301] Service dns-6992/test-service-2 updated: 0 ports
I0416 04:26:09.002551 1 service.go:441] Removing service port "dns-6992/test-service-2:http"
I0416 04:26:36.421914 1 service.go:301] Service services-2800/nodeport-reuse updated: 1 ports
I0416 04:26:36.421953 1 service.go:416] Adding new service port
\"services-2800/nodeport-reuse\" at 100.71.109.134:80/TCP\nI0416 04:26:36.422066 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:36.449647 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-2800/nodeport-reuse\\\" (:30893/tcp4)\"\nI0416 04:26:36.454468 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.505021ms\"\nI0416 04:26:36.454633 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:36.495787 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"41.227474ms\"\nI0416 04:26:36.657439 1 service.go:301] Service services-2800/nodeport-reuse updated: 0 ports\nI0416 04:26:37.495957 1 service.go:441] Removing service port \"services-2800/nodeport-reuse\"\nI0416 04:26:37.496077 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:37.532464 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.498097ms\"\nI0416 04:26:43.972150 1 service.go:301] Service services-2800/nodeport-reuse updated: 1 ports\nI0416 04:26:43.972196 1 service.go:416] Adding new service port \"services-2800/nodeport-reuse\" at 100.68.243.209:80/TCP\nI0416 04:26:43.972283 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:43.998684 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for services-2800/nodeport-reuse\\\" (:30893/tcp4)\"\nI0416 04:26:44.002749 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"30.539247ms\"\nI0416 04:26:44.003042 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:44.034206 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.253705ms\"\nI0416 04:26:44.207086 1 service.go:301] Service services-2800/nodeport-reuse updated: 0 ports\nI0416 04:26:45.034968 1 service.go:441] Removing service port \"services-2800/nodeport-reuse\"\nI0416 04:26:45.035119 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:45.076329 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"41.359569ms\"\nI0416 04:26:47.042169 1 service.go:301] Service services-3168/externalname-service updated: 1 ports\nI0416 04:26:47.042241 1 service.go:416] Adding new service port \"services-3168/externalname-service:http\" at 100.67.116.29:80/TCP\nI0416 04:26:47.042370 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:47.078446 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"36.200795ms\"\nI0416 04:26:47.078581 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:47.128618 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"50.128364ms\"\nI0416 04:26:49.300229 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:49.331471 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.381263ms\"\nI0416 04:26:49.963413 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:26:49.997771 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.433493ms\"\nI0416 04:27:36.818248 1 service.go:301] Service services-3168/externalname-service updated: 0 ports\nI0416 04:27:36.818287 1 service.go:441] Removing service port \"services-3168/externalname-service:http\"\nI0416 04:27:36.818411 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:36.918893 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"100.580285ms\"\nI0416 04:27:36.919050 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:37.047294 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"128.354872ms\"\nI0416 04:27:44.646195 1 service.go:301] Service webhook-1600/e2e-test-webhook updated: 1 ports\nI0416 04:27:44.646244 1 service.go:416] Adding new service port \"webhook-1600/e2e-test-webhook\" at 100.68.126.30:8443/TCP\nI0416 04:27:44.646344 1 
proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:44.687063 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.784609ms\"\nI0416 04:27:44.687292 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:44.725652 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.543783ms\"\nI0416 04:27:44.826012 1 service.go:301] Service conntrack-2375/svc-udp updated: 1 ports\nI0416 04:27:45.725814 1 service.go:416] Adding new service port \"conntrack-2375/svc-udp:udp\" at 100.65.237.14:80/UDP\nI0416 04:27:45.725916 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:45.752876 1 proxier.go:1355] \"Opened local port\" port=\"\\\"nodePort for conntrack-2375/svc-udp:udp\\\" (:30651/udp4)\"\nI0416 04:27:45.758370 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"32.571002ms\"\nI0416 04:27:47.902185 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:47.902442 1 service.go:301] Service webhook-1600/e2e-test-webhook updated: 0 ports\nI0416 04:27:47.958662 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"56.584378ms\"\nI0416 04:27:47.958697 1 service.go:441] Removing service port \"webhook-1600/e2e-test-webhook\"\nI0416 04:27:47.958815 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:48.042092 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"83.373348ms\"\nI0416 04:27:56.656251 1 proxier.go:830] \"Stale service\" protocol=\"udp\" svcPortName=\"conntrack-2375/svc-udp:udp\" clusterIP=\"100.65.237.14\"\nI0416 04:27:56.656453 1 proxier.go:840] Stale udp service NodePort conntrack-2375/svc-udp:udp -> 30651\nI0416 04:27:56.656545 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:27:56.703221 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"47.095634ms\"\nI0416 04:28:02.166678 1 service.go:301] Service conntrack-2986/boom-server updated: 1 ports\nI0416 04:28:02.174564 1 service.go:416] Adding new service port \"conntrack-2986/boom-server\" at 100.66.119.142:9000/TCP\nI0416 04:28:02.176126 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:02.379071 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"212.321076ms\"\nI0416 04:28:02.380699 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:02.695446 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"315.47836ms\"\nI0416 04:28:15.857845 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:15.906652 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"48.923712ms\"\nI0416 04:28:17.589342 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:17.668530 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"79.31229ms\"\nI0416 04:28:17.668722 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:17.728907 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"60.30236ms\"\nI0416 04:28:20.178490 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:20.253866 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"75.507532ms\"\nI0416 04:28:20.254053 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:20.324936 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"71.007127ms\"\nI0416 04:28:21.667674 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:21.771210 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"103.690192ms\"\nI0416 04:28:22.771434 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:22.804857 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.513032ms\"\nI0416 04:28:23.640488 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:23.688394 1 proxier.go:813] \"SyncProxyRules complete\" 
elapsed=\"48.017277ms\"\nI0416 04:28:24.295945 1 service.go:301] Service services-8414/affinity-clusterip-transition updated: 0 ports\nI0416 04:28:24.296202 1 service.go:441] Removing service port \"services-8414/affinity-clusterip-transition\"\nI0416 04:28:24.296391 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:24.328063 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.851439ms\"\nI0416 04:28:25.331315 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:25.369491 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.293187ms\"\nI0416 04:28:31.632916 1 service.go:301] Service webhook-4004/e2e-test-webhook updated: 1 ports\nI0416 04:28:31.632948 1 service.go:416] Adding new service port \"webhook-4004/e2e-test-webhook\" at 100.70.223.98:8443/TCP\nI0416 04:28:31.633019 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:31.664786 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.825968ms\"\nI0416 04:28:31.665033 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:31.698927 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.973194ms\"\nI0416 04:28:34.526779 1 service.go:301] Service webhook-7035/e2e-test-webhook updated: 1 ports\nI0416 04:28:34.528601 1 service.go:416] Adding new service port \"webhook-7035/e2e-test-webhook\" at 100.65.14.118:8443/TCP\nI0416 04:28:34.528748 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:34.605010 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"76.414729ms\"\nI0416 04:28:34.605239 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:34.656867 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"51.814938ms\"\nI0416 04:28:35.013609 1 service.go:301] Service conntrack-2375/svc-udp updated: 0 ports\nI0416 04:28:35.657164 1 service.go:441] Removing service port \"conntrack-2375/svc-udp:udp\"\nI0416 04:28:35.657370 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:35.696807 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.629918ms\"\nI0416 04:28:37.684675 1 service.go:301] Service webhook-7035/e2e-test-webhook updated: 0 ports\nI0416 04:28:37.684708 1 service.go:441] Removing service port \"webhook-7035/e2e-test-webhook\"\nI0416 04:28:37.684817 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:37.718013 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.292906ms\"\nI0416 04:28:37.718754 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:37.752131 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.068059ms\"\nI0416 04:28:37.754922 1 service.go:301] Service webhook-4004/e2e-test-webhook updated: 0 ports\nI0416 04:28:38.752379 1 service.go:441] Removing service port \"webhook-4004/e2e-test-webhook\"\nI0416 04:28:38.752509 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:38.783543 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"31.168603ms\"\nI0416 04:28:48.831198 1 service.go:301] Service services-7969/clusterip-service updated: 1 ports\nI0416 04:28:48.831296 1 service.go:416] Adding new service port \"services-7969/clusterip-service\" at 100.71.97.64:80/TCP\nI0416 04:28:48.831415 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:48.869901 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.60449ms\"\nI0416 04:28:48.870193 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:48.910324 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"40.251932ms\"\nI0416 04:28:49.077112 1 service.go:301] Service services-7969/externalsvc updated: 1 ports\nI0416 04:28:49.911416 1 service.go:416] Adding new 
service port \"services-7969/externalsvc\" at 100.69.229.108:80/TCP\nI0416 04:28:49.911545 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:49.944566 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"33.164394ms\"\nI0416 04:28:50.945230 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:51.041277 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"96.152171ms\"\nI0416 04:28:52.044369 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:52.079559 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"38.136046ms\"\nI0416 04:28:53.287389 1 service.go:301] Service services-7969/clusterip-service updated: 0 ports\nI0416 04:28:53.287430 1 service.go:441] Removing service port \"services-7969/clusterip-service\"\nI0416 04:28:53.287654 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:53.343932 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"56.478164ms\"\nI0416 04:28:54.344726 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:28:54.379531 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"34.861576ms\"\nI0416 04:29:01.280101 1 service.go:301] Service webhook-1322/e2e-test-webhook updated: 1 ports\nI0416 04:29:01.280150 1 service.go:416] Adding new service port \"webhook-1322/e2e-test-webhook\" at 100.71.177.220:8443/TCP\nI0416 04:29:01.280250 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:01.319840 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"39.682063ms\"\nI0416 04:29:01.320020 1 proxier.go:846] \"Syncing iptables rules\"\nI0416 04:29:01.422385 1 proxier.go:813] \"SyncProxyRules complete\" elapsed=\"102.30553ms\"\nI0416 04:29:03.110206 1 proxier.go:846] \"Syncing