Error lines from build-log.txt
... skipping 153 lines ...
I0615 03:16:49.705648 5673 common.go:152] Using cluster name:
I0615 03:16:49.705698 5673 http.go:37] curl https://storage.googleapis.com/kubernetes-release/release/stable.txt
I0615 03:16:49.771975 5673 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0615 03:16:49.774084 5673 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.24.0-beta.2+v1.24.0-beta.1-105-g7e065ff541/linux/amd64/kops
I0615 03:16:50.449851 5673 up.go:44] Cleaning up any leaked resources from previous cluster
I0615 03:16:50.449963 5673 dumplogs.go:45] /logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kops toolbox dump --name e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0615 03:16:50.953716 5673 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0615 03:16:50.953780 5673 down.go:48] /logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kops delete cluster --name e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --yes
I0615 03:16:50.973335 5706 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0615 03:16:50.973440 5706 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io" not found
I0615 03:16:51.458628 5673 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/06/15 03:16:51 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0615 03:16:51.468549 5673 http.go:37] curl https://ip.jsb.workers.dev
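The two curl lines above show the runner's external-IP lookup: the GCE metadata server is tried first and returns 404 (no external access config on this instance), then it falls back to an external what's-my-ip service. A minimal sketch of that fallback, assuming a pluggable `fetch` for illustration (only the two URLs come from the log; the `Metadata-Flavor: Google` header is what the GCE metadata server requires):

```python
import urllib.request

# Both endpoints appear in the log: GCE metadata first, external service as fallback.
METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/network-interfaces/0/access-configs/0/external-ip")
FALLBACK_URL = "https://ip.jsb.workers.dev"

def external_ip(fetch=None):
    """Return this host's external IP, preferring the GCE metadata server.

    `fetch(url, headers)` is injectable for testing; by default it performs
    a real HTTP GET. The Metadata-Flavor header is mandatory on GCE.
    """
    if fetch is None:
        def fetch(url, headers):
            req = urllib.request.Request(url, headers=headers)
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read().decode().strip()
    try:
        return fetch(METADATA_URL, {"Metadata-Flavor": "Google"})
    except Exception:  # e.g. HTTP 404 outside GCE / without an access config
        return fetch(FALLBACK_URL, {})
```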
I0615 03:16:51.566017 5673 up.go:156] /logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kops create cluster --name e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.24.1 --ssh-public-key /tmp/kops/e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220610 --channel=alpha --networking=amazonvpc --container-runtime=containerd --node-size=t3.large --discovery-store=s3://k8s-kops-prow/discovery --admin-access 35.224.12.48/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones sa-east-1a --master-size c5.large
I0615 03:16:51.587677 5718 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0615 03:16:51.587974 5718 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
I0615 03:16:51.616282 5718 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/id_ed25519.pub
I0615 03:16:52.123896 5718 new_cluster.go:1168] Cloud Provider ID = aws
... skipping 548 lines ...
I0615 03:17:28.552851 5673 up.go:240] /logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kops validate cluster --name e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0615 03:17:28.575591 5757 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0615 03:17:28.575744 5757 featureflag.go:164] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io
W0615 03:17:29.978722 5757 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0615 03:17:40.027996 5757 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-sa-east-1a Master c5.large 1 1 sa-east-1a
nodes-sa-east-1a Node t3.large 4 4 sa-east-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0615 03:17:50.062511 5757 validate_cluster.go:232] (will retry): cluster not yet healthy
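The validation message explains what the retries are waiting for: the API record starts as "no such host", then holds the placeholder 203.0.113.123 that kops creates, and only becomes usable once dns-controller writes the real address. A minimal sketch of that wait, assuming a simple fixed-interval poll (only the placeholder IP is taken from the log; the helper itself is hypothetical):

```python
import socket
import time

PLACEHOLDER_IP = "203.0.113.123"  # placeholder record kops creates up front

def wait_for_api_dns(hostname, timeout=900, interval=10, resolve=socket.gethostbyname):
    """Poll DNS until `hostname` resolves to a real, non-placeholder address.

    Mirrors what `kops validate cluster --wait 15m` is waiting on above.
    `resolve` is injectable for testing.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            ip = resolve(hostname)  # raises OSError while the record doesn't exist
        except OSError:
            ip = None
        if ip and ip != PLACEHOLDER_IP:
            return ip
        time.sleep(interval)
    raise TimeoutError(f"{hostname} unresolved or placeholder after {timeout}s")
```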
... (same "dns apiserver" validation failure repeated; retry warnings at 03:18:00, 03:18:10, and 03:18:20 elided) ...
W0615 03:18:30.224120 5757 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... (same "dns apiserver" validation failure repeated roughly every 10s; retry warnings from 03:18:40 through 03:21:10 elided) ...
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-sa-east-1a Master c5.large 1 1 sa-east-1a
nodes-sa-east-1a Node t3.large 4 4 sa-east-1a
... skipping 23 lines ...
Pod kube-system/ebs-csi-controller-75d8d4d556-z8zm8 system-cluster-critical pod "ebs-csi-controller-75d8d4d556-z8zm8" is pending
Pod kube-system/ebs-csi-node-6v7hs system-node-critical pod "ebs-csi-node-6v7hs" is pending
Pod kube-system/ebs-csi-node-px4ds system-node-critical pod "ebs-csi-node-px4ds" is pending
Pod kube-system/ebs-csi-node-q59cv system-node-critical pod "ebs-csi-node-q59cv" is pending
Pod kube-system/ebs-csi-node-snmvj system-node-critical pod "ebs-csi-node-snmvj" is pending
Validation Failed
W0615 03:21:24.363584 5757 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-sa-east-1a Master c5.large 1 1 sa-east-1a
nodes-sa-east-1a Node t3.large 4 4 sa-east-1a
... skipping 25 lines ...
Pod kube-system/ebs-csi-node-px4ds system-node-critical pod "ebs-csi-node-px4ds" is pending
Pod kube-system/ebs-csi-node-q59cv system-node-critical pod "ebs-csi-node-q59cv" is pending
Pod kube-system/ebs-csi-node-snmvj system-node-critical pod "ebs-csi-node-snmvj" is pending
Pod kube-system/kube-controller-manager-i-020fc75861952cd2c system-cluster-critical pod "kube-controller-manager-i-020fc75861952cd2c" is pending
Pod kube-system/kube-scheduler-i-020fc75861952cd2c system-cluster-critical pod "kube-scheduler-i-020fc75861952cd2c" is pending
Validation Failed
W0615 03:21:36.760437 5757 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-sa-east-1a Master c5.large 1 1 sa-east-1a
nodes-sa-east-1a Node t3.large 4 4 sa-east-1a
... skipping 13 lines ...
Pod kube-system/ebs-csi-controller-75d8d4d556-z8zm8 system-cluster-critical pod "ebs-csi-controller-75d8d4d556-z8zm8" is pending
Pod kube-system/ebs-csi-node-6v7hs system-node-critical pod "ebs-csi-node-6v7hs" is pending
Pod kube-system/ebs-csi-node-px4ds system-node-critical pod "ebs-csi-node-px4ds" is pending
Pod kube-system/ebs-csi-node-q59cv system-node-critical pod "ebs-csi-node-q59cv" is pending
Pod kube-system/ebs-csi-node-snmvj system-node-critical pod "ebs-csi-node-snmvj" is pending
Validation Failed
W0615 03:21:49.329446 5757 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-sa-east-1a Master c5.large 1 1 sa-east-1a
nodes-sa-east-1a Node t3.large 4 4 sa-east-1a
... skipping 9 lines ...
KIND NAME MESSAGE
Pod kube-system/coredns-57d68fdf4b-w22gv system-cluster-critical pod "coredns-57d68fdf4b-w22gv" is pending
Pod kube-system/ebs-csi-controller-75d8d4d556-t7xxb system-cluster-critical pod "ebs-csi-controller-75d8d4d556-t7xxb" is pending
Pod kube-system/ebs-csi-controller-75d8d4d556-z8zm8 system-cluster-critical pod "ebs-csi-controller-75d8d4d556-z8zm8" is pending
Pod kube-system/ebs-csi-node-6v7hs system-node-critical pod "ebs-csi-node-6v7hs" is pending
Validation Failed
W0615 03:22:01.773539 5757 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-sa-east-1a Master c5.large 1 1 sa-east-1a
nodes-sa-east-1a Node t3.large 4 4 sa-east-1a
... skipping 6 lines ...
i-0b28fcd2505512be6 node True
VALIDATION ERRORS
KIND NAME MESSAGE
Pod kube-system/ebs-csi-controller-75d8d4d556-t7xxb system-cluster-critical pod "ebs-csi-controller-75d8d4d556-t7xxb" is pending
Validation Failed
W0615 03:22:14.376054 5757 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-sa-east-1a Master c5.large 1 1 sa-east-1a
nodes-sa-east-1a Node t3.large 4 4 sa-east-1a
... skipping 600 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: block]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 585 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:24:55.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-os-rejection-1498" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS","total":-1,"completed":1,"skipped":1,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:24:56.254: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 49 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
[Driver: csi-hostpath]
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver "csi-hostpath" does not support topology - skipping
test/e2e/storage/testsuites/topology.go:93
------------------------------
... skipping 65 lines ...
STEP: Destroying namespace "services-771" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:760
•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 33 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:24:57.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2179" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":1,"skipped":22,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:24:57.464: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 119 lines ...
Jun 15 03:24:54.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
W0615 03:24:55.449348 6630 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 15 03:24:55.449: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail when exceeds active deadline
test/e2e/apps/job.go:293
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
test/e2e/framework/framework.go:188
Jun 15 03:24:58.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4666" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":1,"skipped":18,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:9.027 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:03.804: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
Jun 15 03:24:55.264: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test substitution in container's command
Jun 15 03:24:55.988: INFO: Waiting up to 5m0s for pod "var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878" in namespace "var-expansion-2906" to be "Succeeded or Failed"
Jun 15 03:24:56.134: INFO: Pod "var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878": Phase="Pending", Reason="", readiness=false. Elapsed: 146.304216ms
Jun 15 03:24:58.279: INFO: Pod "var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290913171s
Jun 15 03:25:00.424: INFO: Pod "var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43586061s
Jun 15 03:25:02.568: INFO: Pod "var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580176492s
Jun 15 03:25:04.715: INFO: Pod "var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.726474741s
STEP: Saw pod success
Jun 15 03:25:04.715: INFO: Pod "var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878" satisfied condition "Succeeded or Failed"
Jun 15 03:25:04.859: INFO: Trying to get logs from node i-05fe3937684c9d649 pod var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878 container dapi-container: <nil>
STEP: delete the pod
Jun 15 03:25:05.164: INFO: Waiting for pod var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878 to disappear
Jun 15 03:25:05.310: INFO: Pod var-expansion-bdf6b6cf-1fe8-4339-9490-921f32a65878 no longer exists
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:11.063 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
should allow substituting values in a container's command [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:05.757: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/framework/framework.go:188
... skipping 25 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_downwardapi.go:93
STEP: Creating a pod to test downward API volume plugin
Jun 15 03:24:56.014: INFO: Waiting up to 5m0s for pod "metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34" in namespace "projected-2407" to be "Succeeded or Failed"
Jun 15 03:24:56.163: INFO: Pod "metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34": Phase="Pending", Reason="", readiness=false. Elapsed: 149.186619ms
Jun 15 03:24:58.307: INFO: Pod "metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292964034s
Jun 15 03:25:00.451: INFO: Pod "metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436964439s
Jun 15 03:25:02.596: INFO: Pod "metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58152035s
Jun 15 03:25:04.739: INFO: Pod "metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72541862s
Jun 15 03:25:06.885: INFO: Pod "metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.870529928s
STEP: Saw pod success
Jun 15 03:25:06.885: INFO: Pod "metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34" satisfied condition "Succeeded or Failed"
Jun 15 03:25:07.028: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34 container client-container: <nil>
STEP: delete the pod
Jun 15 03:25:07.336: INFO: Waiting for pod metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34 to disappear
Jun 15 03:25:07.482: INFO: Pod metadata-volume-c9823928-e123-40d0-8b5e-047bcf8ccc34 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:13.206 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_downwardapi.go:93
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":1,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: creating execpod-noendpoints on node i-08d19c5de9fb20ea1
Jun 15 03:24:56.411: INFO: Creating new exec pod
Jun 15 03:25:06.846: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node i-08d19c5de9fb20ea1
Jun 15 03:25:06.846: INFO: Running '/logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kubectl --server=https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1407 exec execpod-noendpointskk9nb -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jun 15 03:25:08.303: INFO: rc: 1
Jun 15 03:25:08.304: INFO: error contained 'REFUSED', as expected: error running /logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kubectl --server=https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1407 exec execpod-noendpointskk9nb -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:
stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1
error:
exit status 1
[AfterEach] [sig-network] Services
test/e2e/framework/framework.go:188
Jun 15 03:25:08.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1407" for this suite.
[AfterEach] [sig-network] Services
... skipping 3 lines ...
• [SLOW TEST:13.784 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be rejected when no endpoints exist
test/e2e/network/service.go:1999
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":1,"skipped":9,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:08.616: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPathSymlink]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver hostPathSymlink doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 51 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-195f6764-16f1-4c5a-a5aa-7bf9091360ad
STEP: Creating a pod to test consume configMaps
Jun 15 03:25:05.150: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a555f4e4-eec8-49fe-b9ea-b3032c5e48e5" in namespace "projected-7223" to be "Succeeded or Failed"
Jun 15 03:25:05.294: INFO: Pod "pod-projected-configmaps-a555f4e4-eec8-49fe-b9ea-b3032c5e48e5": Phase="Pending", Reason="", readiness=false. Elapsed: 144.006644ms
Jun 15 03:25:07.440: INFO: Pod "pod-projected-configmaps-a555f4e4-eec8-49fe-b9ea-b3032c5e48e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2899242s
Jun 15 03:25:09.586: INFO: Pod "pod-projected-configmaps-a555f4e4-eec8-49fe-b9ea-b3032c5e48e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435479526s
STEP: Saw pod success
Jun 15 03:25:09.586: INFO: Pod "pod-projected-configmaps-a555f4e4-eec8-49fe-b9ea-b3032c5e48e5" satisfied condition "Succeeded or Failed"
Jun 15 03:25:09.731: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-projected-configmaps-a555f4e4-eec8-49fe-b9ea-b3032c5e48e5 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun 15 03:25:10.026: INFO: Waiting for pod pod-projected-configmaps-a555f4e4-eec8-49fe-b9ea-b3032c5e48e5 to disappear
Jun 15 03:25:10.170: INFO: Pod pod-projected-configmaps-a555f4e4-eec8-49fe-b9ea-b3032c5e48e5 no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.611 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0615 03:24:55.358763 6671 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 15 03:24:55.358: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
test/e2e/common/node/init_container.go:164
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
test/e2e/framework/framework.go:652
STEP: creating the pod
Jun 15 03:24:55.934: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
test/e2e/framework/framework.go:188
Jun 15 03:25:11.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2484" for this suite.
• [SLOW TEST:16.786 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:11.583: INFO: Only supported for providers [azure] (not aws)
... skipping 28 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:51
[It] new files should be created with FSGroup ownership when container is non-root
test/e2e/common/storage/empty_dir.go:60
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 15 03:24:56.113: INFO: Waiting up to 5m0s for pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d" in namespace "emptydir-1101" to be "Succeeded or Failed"
Jun 15 03:24:56.260: INFO: Pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d": Phase="Pending", Reason="", readiness=false. Elapsed: 146.261412ms
Jun 15 03:24:58.405: INFO: Pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2919595s
Jun 15 03:25:00.551: INFO: Pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437631822s
Jun 15 03:25:02.699: INFO: Pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58511444s
Jun 15 03:25:04.844: INFO: Pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.730945402s
Jun 15 03:25:06.990: INFO: Pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.876957459s
Jun 15 03:25:09.136: INFO: Pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.022619705s
Jun 15 03:25:11.280: INFO: Pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.166925564s
STEP: Saw pod success
Jun 15 03:25:11.280: INFO: Pod "pod-0ae067a8-035c-492a-876a-e48c4d4f424d" satisfied condition "Succeeded or Failed"
Jun 15 03:25:11.424: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-0ae067a8-035c-492a-876a-e48c4d4f424d container test-container: <nil>
STEP: delete the pod
Jun 15 03:25:11.722: INFO: Waiting for pod pod-0ae067a8-035c-492a-876a-e48c4d4f424d to disappear
Jun 15 03:25:11.866: INFO: Pod pod-0ae067a8-035c-492a-876a-e48c4d4f424d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:188
... skipping 6 lines ...
test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/empty_dir.go:49
new files should be created with FSGroup ownership when container is non-root
test/e2e/common/storage/empty_dir.go:60
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":1,"skipped":8,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating projection with secret that has name projected-secret-test-2d64b2c8-38ff-4146-9fa0-e8cfc480cf38
STEP: Creating a pod to test consume secrets
Jun 15 03:25:07.120: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ded07b8e-4b62-4b8c-b8d7-bb74d47d1c45" in namespace "projected-961" to be "Succeeded or Failed"
Jun 15 03:25:07.264: INFO: Pod "pod-projected-secrets-ded07b8e-4b62-4b8c-b8d7-bb74d47d1c45": Phase="Pending", Reason="", readiness=false. Elapsed: 143.791731ms
Jun 15 03:25:09.409: INFO: Pod "pod-projected-secrets-ded07b8e-4b62-4b8c-b8d7-bb74d47d1c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288355382s
Jun 15 03:25:11.553: INFO: Pod "pod-projected-secrets-ded07b8e-4b62-4b8c-b8d7-bb74d47d1c45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432713676s
STEP: Saw pod success
Jun 15 03:25:11.553: INFO: Pod "pod-projected-secrets-ded07b8e-4b62-4b8c-b8d7-bb74d47d1c45" satisfied condition "Succeeded or Failed"
Jun 15 03:25:11.697: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-projected-secrets-ded07b8e-4b62-4b8c-b8d7-bb74d47d1c45 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 15 03:25:11.992: INFO: Waiting for pod pod-projected-secrets-ded07b8e-4b62-4b8c-b8d7-bb74d47d1c45 to disappear
Jun 15 03:25:12.136: INFO: Pod pod-projected-secrets-ded07b8e-4b62-4b8c-b8d7-bb74d47d1c45 no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.608 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 117 lines ...
Jun 15 03:24:55.410: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 15 03:24:56.138: INFO: Waiting up to 5m0s for pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11" in namespace "emptydir-5764" to be "Succeeded or Failed"
Jun 15 03:24:56.284: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11": Phase="Pending", Reason="", readiness=false. Elapsed: 145.692435ms
Jun 15 03:24:58.431: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292078185s
Jun 15 03:25:00.577: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438266064s
Jun 15 03:25:02.723: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584260033s
Jun 15 03:25:04.869: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11": Phase="Pending", Reason="", readiness=false. Elapsed: 8.730526665s
Jun 15 03:25:07.015: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11": Phase="Pending", Reason="", readiness=false. Elapsed: 10.875985781s
Jun 15 03:25:09.160: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11": Phase="Pending", Reason="", readiness=false. Elapsed: 13.021894571s
Jun 15 03:25:11.306: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11": Phase="Pending", Reason="", readiness=false. Elapsed: 15.167479181s
Jun 15 03:25:13.453: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.314468441s
STEP: Saw pod success
Jun 15 03:25:13.453: INFO: Pod "pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11" satisfied condition "Succeeded or Failed"
Jun 15 03:25:13.599: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11 container test-container: <nil>
STEP: delete the pod
Jun 15 03:25:13.901: INFO: Waiting for pod pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11 to disappear
Jun 15 03:25:14.047: INFO: Pod pod-6a9f4c5f-e0d4-4406-9ea2-f8b2b3e3cf11 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:19.658 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Projected combined
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-projected-all-test-volume-493c96d9-1d12-4ef1-aa34-302cb68fd4cf
STEP: Creating secret with name secret-projected-all-test-volume-b0bae8bb-0014-4fb4-a7fc-f0ee528047a7
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 15 03:25:09.376: INFO: Waiting up to 5m0s for pod "projected-volume-b1f2dcb1-6aea-4ced-8119-84cc2d3a38e5" in namespace "projected-6753" to be "Succeeded or Failed"
Jun 15 03:25:09.520: INFO: Pod "projected-volume-b1f2dcb1-6aea-4ced-8119-84cc2d3a38e5": Phase="Pending", Reason="", readiness=false. Elapsed: 143.519357ms
Jun 15 03:25:11.664: INFO: Pod "projected-volume-b1f2dcb1-6aea-4ced-8119-84cc2d3a38e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287480887s
Jun 15 03:25:13.809: INFO: Pod "projected-volume-b1f2dcb1-6aea-4ced-8119-84cc2d3a38e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432323983s
STEP: Saw pod success
Jun 15 03:25:13.809: INFO: Pod "projected-volume-b1f2dcb1-6aea-4ced-8119-84cc2d3a38e5" satisfied condition "Succeeded or Failed"
Jun 15 03:25:13.953: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod projected-volume-b1f2dcb1-6aea-4ced-8119-84cc2d3a38e5 container projected-all-volume-test: <nil>
STEP: delete the pod
Jun 15 03:25:14.248: INFO: Waiting for pod projected-volume-b1f2dcb1-6aea-4ced-8119-84cc2d3a38e5 to disappear
Jun 15 03:25:14.392: INFO: Pod projected-volume-b1f2dcb1-6aea-4ced-8119-84cc2d3a38e5 no longer exists
[AfterEach] [sig-storage] Projected combined
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.744 seconds]
[sig-storage] Projected combined
test/e2e/common/storage/framework.go:23
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:14.708: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 34 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:25:15.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4402" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:15.389: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:188
... skipping 55 lines ...
• [SLOW TEST:21.122 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 8 lines ...
[It] should support existing directory
test/e2e/storage/testsuites/subpath.go:207
Jun 15 03:24:55.982: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 15 03:24:55.982: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-vd7d
STEP: Creating a pod to test subpath
Jun 15 03:24:56.134: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-vd7d" in namespace "provisioning-3334" to be "Succeeded or Failed"
Jun 15 03:24:56.279: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 144.931787ms
Jun 15 03:24:58.424: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290423584s
Jun 15 03:25:00.569: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435511099s
Jun 15 03:25:02.715: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581055379s
Jun 15 03:25:04.859: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.725259996s
Jun 15 03:25:07.003: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.869251637s
Jun 15 03:25:09.148: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.01433227s
Jun 15 03:25:11.292: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.158277474s
Jun 15 03:25:13.437: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.303162217s
Jun 15 03:25:15.583: INFO: Pod "pod-subpath-test-inlinevolume-vd7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.448895984s
STEP: Saw pod success
Jun 15 03:25:15.583: INFO: Pod "pod-subpath-test-inlinevolume-vd7d" satisfied condition "Succeeded or Failed"
Jun 15 03:25:15.727: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-subpath-test-inlinevolume-vd7d container test-container-volume-inlinevolume-vd7d: <nil>
STEP: delete the pod
Jun 15 03:25:16.020: INFO: Waiting for pod pod-subpath-test-inlinevolume-vd7d to disappear
Jun 15 03:25:16.163: INFO: Pod pod-subpath-test-inlinevolume-vd7d no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-vd7d
Jun 15 03:25:16.164: INFO: Deleting pod "pod-subpath-test-inlinevolume-vd7d" in namespace "provisioning-3334"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":9,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:16.929: INFO: Only supported for providers [vsphere] (not aws)
... skipping 54 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
Jun 15 03:24:56.002: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 15 03:24:56.300: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qnxs
STEP: Creating a pod to test subpath
Jun 15 03:24:56.472: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qnxs" in namespace "provisioning-135" to be "Succeeded or Failed"
Jun 15 03:24:56.616: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Pending", Reason="", readiness=false. Elapsed: 143.766779ms
Jun 15 03:24:58.763: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290384764s
Jun 15 03:25:00.908: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435414269s
Jun 15 03:25:03.054: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581153135s
Jun 15 03:25:05.199: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726237816s
Jun 15 03:25:07.344: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.871411173s
Jun 15 03:25:09.490: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Pending", Reason="", readiness=false. Elapsed: 13.017269545s
Jun 15 03:25:11.635: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Pending", Reason="", readiness=false. Elapsed: 15.162391045s
Jun 15 03:25:13.780: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Pending", Reason="", readiness=false. Elapsed: 17.307412989s
Jun 15 03:25:15.924: INFO: Pod "pod-subpath-test-inlinevolume-qnxs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.452055623s
STEP: Saw pod success
Jun 15 03:25:15.925: INFO: Pod "pod-subpath-test-inlinevolume-qnxs" satisfied condition "Succeeded or Failed"
Jun 15 03:25:16.068: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-inlinevolume-qnxs container test-container-subpath-inlinevolume-qnxs: <nil>
STEP: delete the pod
Jun 15 03:25:16.371: INFO: Waiting for pod pod-subpath-test-inlinevolume-qnxs to disappear
Jun 15 03:25:16.514: INFO: Pod pod-subpath-test-inlinevolume-qnxs no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qnxs
Jun 15 03:25:16.514: INFO: Deleting pod "pod-subpath-test-inlinevolume-qnxs" in namespace "provisioning-135"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":18,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-apps] TTLAfterFinished
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 32 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 15 03:25:14.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-523e2550-1f69-48c2-8aca-17385d284f14" in namespace "projected-841" to be "Succeeded or Failed"
Jun 15 03:25:14.271: INFO: Pod "downwardapi-volume-523e2550-1f69-48c2-8aca-17385d284f14": Phase="Pending", Reason="", readiness=false. Elapsed: 143.949582ms
Jun 15 03:25:16.416: INFO: Pod "downwardapi-volume-523e2550-1f69-48c2-8aca-17385d284f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28927436s
Jun 15 03:25:18.573: INFO: Pod "downwardapi-volume-523e2550-1f69-48c2-8aca-17385d284f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.446542723s
STEP: Saw pod success
Jun 15 03:25:18.573: INFO: Pod "downwardapi-volume-523e2550-1f69-48c2-8aca-17385d284f14" satisfied condition "Succeeded or Failed"
Jun 15 03:25:18.718: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod downwardapi-volume-523e2550-1f69-48c2-8aca-17385d284f14 container client-container: <nil>
STEP: delete the pod
Jun 15 03:25:19.018: INFO: Waiting for pod downwardapi-volume-523e2550-1f69-48c2-8aca-17385d284f14 to disappear
Jun 15 03:25:19.162: INFO: Pod downwardapi-volume-523e2550-1f69-48c2-8aca-17385d284f14 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.484 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:19.477: INFO: Only supported for providers [openstack] (not aws)
... skipping 14 lines ...
Only supported for providers [openstack] (not aws)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds","total":-1,"completed":1,"skipped":12,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:25:17.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:25:19.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4745" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:13.928 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should mutate configmap [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:22.684: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 310 lines ...
test/e2e/kubectl/framework.go:23
Guestbook application
test/e2e/kubectl/kubectl.go:340
should create and stop a working application [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:22.909: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 44 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:101
Jun 15 03:25:18.458: INFO: Waiting up to 5m0s for pod "busybox-user-0-6a71dab5-bf18-4e9c-b974-f64c73c55acc" in namespace "security-context-test-3584" to be "Succeeded or Failed"
Jun 15 03:25:18.602: INFO: Pod "busybox-user-0-6a71dab5-bf18-4e9c-b974-f64c73c55acc": Phase="Pending", Reason="", readiness=false. Elapsed: 144.033616ms
Jun 15 03:25:20.747: INFO: Pod "busybox-user-0-6a71dab5-bf18-4e9c-b974-f64c73c55acc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289342691s
Jun 15 03:25:22.892: INFO: Pod "busybox-user-0-6a71dab5-bf18-4e9c-b974-f64c73c55acc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434093915s
Jun 15 03:25:25.037: INFO: Pod "busybox-user-0-6a71dab5-bf18-4e9c-b974-f64c73c55acc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579770405s
Jun 15 03:25:25.038: INFO: Pod "busybox-user-0-6a71dab5-bf18-4e9c-b974-f64c73c55acc" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:188
Jun 15 03:25:25.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3584" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
When creating a container with runAsUser
test/e2e/common/node/security_context.go:52
should run the container with uid 0 [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:101
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:25.344: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/framework/framework.go:188
... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-file]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [azure] (not aws)
test/e2e/storage/drivers/in_tree.go:2077
------------------------------
... skipping 45 lines ...
• [SLOW TEST:12.170 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
should apply changes to a job status [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] Job should apply changes to a job status [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/common/node/sysctl.go:37
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/common/node/sysctl.go:67
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.168 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
test/e2e/common/node/framework.go:23
should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:29.148: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 47 lines ...
Jun 15 03:25:16.486: INFO: PersistentVolumeClaim pvc-pcmdf found but phase is Pending instead of Bound.
Jun 15 03:25:18.632: INFO: PersistentVolumeClaim pvc-pcmdf found and phase=Bound (8.721855005s)
Jun 15 03:25:18.632: INFO: Waiting up to 3m0s for PersistentVolume local-np226 to have phase Bound
Jun 15 03:25:18.776: INFO: PersistentVolume local-np226 found and phase=Bound (144.287617ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tv7q
STEP: Creating a pod to test subpath
Jun 15 03:25:19.215: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tv7q" in namespace "provisioning-3152" to be "Succeeded or Failed"
Jun 15 03:25:19.359: INFO: Pod "pod-subpath-test-preprovisionedpv-tv7q": Phase="Pending", Reason="", readiness=false. Elapsed: 143.944348ms
Jun 15 03:25:21.504: INFO: Pod "pod-subpath-test-preprovisionedpv-tv7q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28958694s
Jun 15 03:25:23.651: INFO: Pod "pod-subpath-test-preprovisionedpv-tv7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436256144s
Jun 15 03:25:25.796: INFO: Pod "pod-subpath-test-preprovisionedpv-tv7q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.581582564s
STEP: Saw pod success
Jun 15 03:25:25.796: INFO: Pod "pod-subpath-test-preprovisionedpv-tv7q" satisfied condition "Succeeded or Failed"
Jun 15 03:25:25.940: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-preprovisionedpv-tv7q container test-container-subpath-preprovisionedpv-tv7q: <nil>
STEP: delete the pod
Jun 15 03:25:26.235: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tv7q to disappear
Jun 15 03:25:26.378: INFO: Pod pod-subpath-test-preprovisionedpv-tv7q no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tv7q
Jun 15 03:25:26.378: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tv7q" in namespace "provisioning-3152"
... skipping 26 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:29.464: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/framework/framework.go:188
... skipping 51 lines ...
STEP: Create set of pods
Jun 15 03:25:11.638: INFO: created test-pod-1
Jun 15 03:25:11.784: INFO: created test-pod-2
Jun 15 03:25:11.930: INFO: created test-pod-3
STEP: waiting for all 3 pods to be running
Jun 15 03:25:11.930: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-4580' to be running and ready
Jun 15 03:25:12.363: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:12.363: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:12.363: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:12.363: INFO: 0 / 3 pods in namespace 'pods-4580' are running and ready (0 seconds elapsed)
Jun 15 03:25:12.363: INFO: expected 0 pod replicas in namespace 'pods-4580', 0 are Running and Ready.
Jun 15 03:25:12.363: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 15 03:25:12.363: INFO: test-pod-1 i-0b28fcd2505512be6 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:12.363: INFO: test-pod-2 i-0b28fcd2505512be6 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:12.363: INFO: test-pod-3 i-0b28fcd2505512be6 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:12.363: INFO:
Jun 15 03:25:14.796: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:14.796: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:14.796: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:14.796: INFO: 0 / 3 pods in namespace 'pods-4580' are running and ready (2 seconds elapsed)
Jun 15 03:25:14.796: INFO: expected 0 pod replicas in namespace 'pods-4580', 0 are Running and Ready.
Jun 15 03:25:14.796: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 15 03:25:14.796: INFO: test-pod-1 i-0b28fcd2505512be6 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:14.796: INFO: test-pod-2 i-0b28fcd2505512be6 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:14.796: INFO: test-pod-3 i-0b28fcd2505512be6 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:14.796: INFO:
Jun 15 03:25:16.797: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:16.797: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:16.797: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:16.797: INFO: 0 / 3 pods in namespace 'pods-4580' are running and ready (4 seconds elapsed)
Jun 15 03:25:16.797: INFO: expected 0 pod replicas in namespace 'pods-4580', 0 are Running and Ready.
Jun 15 03:25:16.797: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 15 03:25:16.797: INFO: test-pod-1 i-0b28fcd2505512be6 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:16.797: INFO: test-pod-2 i-0b28fcd2505512be6 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:16.797: INFO: test-pod-3 i-0b28fcd2505512be6 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:16.797: INFO:
Jun 15 03:25:18.798: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 15 03:25:18.798: INFO: 2 / 3 pods in namespace 'pods-4580' are running and ready (6 seconds elapsed)
Jun 15 03:25:18.798: INFO: expected 0 pod replicas in namespace 'pods-4580', 0 are Running and Ready.
Jun 15 03:25:18.798: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 15 03:25:18.798: INFO: test-pod-3 i-0b28fcd2505512be6 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-15 03:25:11 +0000 UTC }]
Jun 15 03:25:18.798: INFO:
Jun 15 03:25:20.798: INFO: 3 / 3 pods in namespace 'pods-4580' are running and ready (8 seconds elapsed)
... skipping 17 lines ...
• [SLOW TEST:20.053 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should delete a collection of pods [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:30.553: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 50 lines ...
Jun 15 03:25:17.328: INFO: PersistentVolumeClaim pvc-fdt2r found but phase is Pending instead of Bound.
Jun 15 03:25:19.472: INFO: PersistentVolumeClaim pvc-fdt2r found and phase=Bound (6.578796303s)
Jun 15 03:25:19.472: INFO: Waiting up to 3m0s for PersistentVolume local-vl4ff to have phase Bound
Jun 15 03:25:19.617: INFO: PersistentVolume local-vl4ff found and phase=Bound (144.522289ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rxkm
STEP: Creating a pod to test subpath
Jun 15 03:25:20.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rxkm" in namespace "provisioning-6803" to be "Succeeded or Failed"
Jun 15 03:25:20.194: INFO: Pod "pod-subpath-test-preprovisionedpv-rxkm": Phase="Pending", Reason="", readiness=false. Elapsed: 144.015694ms
Jun 15 03:25:22.355: INFO: Pod "pod-subpath-test-preprovisionedpv-rxkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304264104s
Jun 15 03:25:24.501: INFO: Pod "pod-subpath-test-preprovisionedpv-rxkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450635751s
Jun 15 03:25:26.647: INFO: Pod "pod-subpath-test-preprovisionedpv-rxkm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.596442973s
STEP: Saw pod success
Jun 15 03:25:26.647: INFO: Pod "pod-subpath-test-preprovisionedpv-rxkm" satisfied condition "Succeeded or Failed"
Jun 15 03:25:26.791: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-subpath-test-preprovisionedpv-rxkm container test-container-subpath-preprovisionedpv-rxkm: <nil>
STEP: delete the pod
Jun 15 03:25:27.091: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rxkm to disappear
Jun 15 03:25:27.236: INFO: Pod pod-subpath-test-preprovisionedpv-rxkm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rxkm
Jun 15 03:25:27.236: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rxkm" in namespace "provisioning-6803"
... skipping 65 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:25:30.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9091" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-volume-map-ca01a7d2-ccc5-4670-a847-13054ab786f3
STEP: Creating a pod to test consume configMaps
Jun 15 03:25:24.110: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac301e5b-7738-4aac-9920-e63b64ed1066" in namespace "configmap-219" to be "Succeeded or Failed"
Jun 15 03:25:24.255: INFO: Pod "pod-configmaps-ac301e5b-7738-4aac-9920-e63b64ed1066": Phase="Pending", Reason="", readiness=false. Elapsed: 144.447928ms
Jun 15 03:25:26.400: INFO: Pod "pod-configmaps-ac301e5b-7738-4aac-9920-e63b64ed1066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289194332s
Jun 15 03:25:28.545: INFO: Pod "pod-configmaps-ac301e5b-7738-4aac-9920-e63b64ed1066": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434975078s
Jun 15 03:25:30.690: INFO: Pod "pod-configmaps-ac301e5b-7738-4aac-9920-e63b64ed1066": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579104792s
STEP: Saw pod success
Jun 15 03:25:30.690: INFO: Pod "pod-configmaps-ac301e5b-7738-4aac-9920-e63b64ed1066" satisfied condition "Succeeded or Failed"
Jun 15 03:25:30.833: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-configmaps-ac301e5b-7738-4aac-9920-e63b64ed1066 container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:25:31.129: INFO: Waiting for pod pod-configmaps-ac301e5b-7738-4aac-9920-e63b64ed1066 to disappear
Jun 15 03:25:31.273: INFO: Pod pod-configmaps-ac301e5b-7738-4aac-9920-e63b64ed1066 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.750 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-storage] Subpath
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:24:56.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 2 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating pod pod-subpath-test-projected-nbm9
STEP: Creating a pod to test atomic-volume-subpath
Jun 15 03:24:58.453: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-nbm9" in namespace "subpath-3811" to be "Succeeded or Failed"
Jun 15 03:24:58.597: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 144.213424ms
Jun 15 03:25:00.743: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289689072s
Jun 15 03:25:02.889: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43594877s
Jun 15 03:25:05.034: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581286466s
Jun 15 03:25:07.179: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Running", Reason="", readiness=true. Elapsed: 8.725639223s
Jun 15 03:25:09.326: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Running", Reason="", readiness=true. Elapsed: 10.873340873s
... skipping 5 lines ...
Jun 15 03:25:22.199: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Running", Reason="", readiness=true. Elapsed: 23.745997764s
Jun 15 03:25:24.347: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Running", Reason="", readiness=true. Elapsed: 25.89431672s
Jun 15 03:25:26.492: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Running", Reason="", readiness=false. Elapsed: 28.039182075s
Jun 15 03:25:28.637: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Running", Reason="", readiness=false. Elapsed: 30.183575571s
Jun 15 03:25:30.781: INFO: Pod "pod-subpath-test-projected-nbm9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.328070428s
STEP: Saw pod success
Jun 15 03:25:30.781: INFO: Pod "pod-subpath-test-projected-nbm9" satisfied condition "Succeeded or Failed"
Jun 15 03:25:30.928: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-projected-nbm9 container test-container-subpath-projected-nbm9: <nil>
STEP: delete the pod
Jun 15 03:25:31.229: INFO: Waiting for pod pod-subpath-test-projected-nbm9 to disappear
Jun 15 03:25:31.373: INFO: Pod pod-subpath-test-projected-nbm9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-nbm9
Jun 15 03:25:31.373: INFO: Deleting pod "pod-subpath-test-projected-nbm9" in namespace "subpath-3811"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with projected pod [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-network] Firewall rule
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:25:31.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-fb00733c-257f-4fbb-ab30-f0ff3e5ace1b
STEP: Creating a pod to test consume configMaps
Jun 15 03:25:30.472: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46fdc404-3983-4e64-be6c-c337fa10ed9f" in namespace "projected-6179" to be "Succeeded or Failed"
Jun 15 03:25:30.616: INFO: Pod "pod-projected-configmaps-46fdc404-3983-4e64-be6c-c337fa10ed9f": Phase="Pending", Reason="", readiness=false. Elapsed: 144.054482ms
Jun 15 03:25:32.762: INFO: Pod "pod-projected-configmaps-46fdc404-3983-4e64-be6c-c337fa10ed9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289288806s
Jun 15 03:25:34.907: INFO: Pod "pod-projected-configmaps-46fdc404-3983-4e64-be6c-c337fa10ed9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.43447287s
STEP: Saw pod success
Jun 15 03:25:34.907: INFO: Pod "pod-projected-configmaps-46fdc404-3983-4e64-be6c-c337fa10ed9f" satisfied condition "Succeeded or Failed"
Jun 15 03:25:35.050: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-projected-configmaps-46fdc404-3983-4e64-be6c-c337fa10ed9f container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:25:35.359: INFO: Waiting for pod pod-projected-configmaps-46fdc404-3983-4e64-be6c-c337fa10ed9f to disappear
Jun 15 03:25:35.502: INFO: Pod pod-projected-configmaps-46fdc404-3983-4e64-be6c-c337fa10ed9f no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:188
... skipping 16 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
Jun 15 03:25:16.981: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 15 03:25:17.273: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5450" in namespace "provisioning-5450" to be "Succeeded or Failed"
Jun 15 03:25:17.418: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Pending", Reason="", readiness=false. Elapsed: 144.912ms
Jun 15 03:25:19.563: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290035378s
Jun 15 03:25:21.708: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435316085s
Jun 15 03:25:23.854: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581274887s
Jun 15 03:25:25.998: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.725536609s
STEP: Saw pod success
Jun 15 03:25:25.999: INFO: Pod "hostpath-symlink-prep-provisioning-5450" satisfied condition "Succeeded or Failed"
Jun 15 03:25:25.999: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5450" in namespace "provisioning-5450"
Jun 15 03:25:26.147: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5450" to be fully deleted
Jun 15 03:25:26.290: INFO: Creating resource for inline volume
Jun 15 03:25:26.291: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Jun 15 03:25:26.291: INFO: Deleting pod "pod-subpath-test-inlinevolume-wkpc" in namespace "provisioning-5450"
Jun 15 03:25:26.580: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5450" in namespace "provisioning-5450" to be "Succeeded or Failed"
Jun 15 03:25:26.724: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Pending", Reason="", readiness=false. Elapsed: 143.504935ms
Jun 15 03:25:28.869: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288901911s
Jun 15 03:25:31.014: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433613569s
Jun 15 03:25:33.157: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577163903s
Jun 15 03:25:35.302: INFO: Pod "hostpath-symlink-prep-provisioning-5450": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.72167511s
STEP: Saw pod success
Jun 15 03:25:35.302: INFO: Pod "hostpath-symlink-prep-provisioning-5450" satisfied condition "Succeeded or Failed"
Jun 15 03:25:35.302: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5450" in namespace "provisioning-5450"
Jun 15 03:25:35.450: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5450" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:188
Jun 15 03:25:35.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5450" for this suite.
... skipping 60 lines ...
• [SLOW TEST:42.469 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should be ready immediately after startupProbe succeeds
test/e2e/common/node/container_probe.go:411
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:25:35.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:25:37.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2227" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:25:31.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 15 03:25:32.727: INFO: Waiting up to 5m0s for pod "pod-965f81c9-c3da-45f2-b7fa-b1ddddee1af8" in namespace "emptydir-5885" to be "Succeeded or Failed"
Jun 15 03:25:32.872: INFO: Pod "pod-965f81c9-c3da-45f2-b7fa-b1ddddee1af8": Phase="Pending", Reason="", readiness=false. Elapsed: 144.052049ms
Jun 15 03:25:35.016: INFO: Pod "pod-965f81c9-c3da-45f2-b7fa-b1ddddee1af8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288986519s
Jun 15 03:25:37.161: INFO: Pod "pod-965f81c9-c3da-45f2-b7fa-b1ddddee1af8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.43305487s
STEP: Saw pod success
Jun 15 03:25:37.161: INFO: Pod "pod-965f81c9-c3da-45f2-b7fa-b1ddddee1af8" satisfied condition "Succeeded or Failed"
Jun 15 03:25:37.306: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-965f81c9-c3da-45f2-b7fa-b1ddddee1af8 container test-container: <nil>
STEP: delete the pod
Jun 15 03:25:37.685: INFO: Waiting for pod pod-965f81c9-c3da-45f2-b7fa-b1ddddee1af8 to disappear
Jun 15 03:25:37.848: INFO: Pod pod-965f81c9-c3da-45f2-b7fa-b1ddddee1af8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.560 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:38.224: INFO: Only supported for providers [openstack] (not aws)
... skipping 114 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's memory request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 15 03:25:31.741: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4503fdda-c408-4b07-bb64-4148a2a798e6" in namespace "downward-api-1661" to be "Succeeded or Failed"
Jun 15 03:25:31.885: INFO: Pod "downwardapi-volume-4503fdda-c408-4b07-bb64-4148a2a798e6": Phase="Pending", Reason="", readiness=false. Elapsed: 144.146529ms
Jun 15 03:25:34.031: INFO: Pod "downwardapi-volume-4503fdda-c408-4b07-bb64-4148a2a798e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289207281s
Jun 15 03:25:36.177: INFO: Pod "downwardapi-volume-4503fdda-c408-4b07-bb64-4148a2a798e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435336202s
Jun 15 03:25:38.321: INFO: Pod "downwardapi-volume-4503fdda-c408-4b07-bb64-4148a2a798e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579795834s
STEP: Saw pod success
Jun 15 03:25:38.321: INFO: Pod "downwardapi-volume-4503fdda-c408-4b07-bb64-4148a2a798e6" satisfied condition "Succeeded or Failed"
Jun 15 03:25:38.468: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod downwardapi-volume-4503fdda-c408-4b07-bb64-4148a2a798e6 container client-container: <nil>
STEP: delete the pod
Jun 15 03:25:38.775: INFO: Waiting for pod downwardapi-volume-4503fdda-c408-4b07-bb64-4148a2a798e6 to disappear
Jun 15 03:25:38.919: INFO: Pod downwardapi-volume-4503fdda-c408-4b07-bb64-4148a2a798e6 no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.623 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should provide container's memory request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:39.224: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:188
... skipping 143 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:25:39.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6504" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:39.643: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:188
... skipping 106 lines ...
STEP: Destroying namespace "services-6734" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:760
•
------------------------------
{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":5,"skipped":67,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:40.548: INFO: Only supported for providers [azure] (not aws)
... skipping 90 lines ...
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
should validate Statefulset Status endpoints [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 13 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:25:41.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9164" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 96 lines ...
test/e2e/kubectl/framework.go:23
Kubectl apply
test/e2e/kubectl/kubectl.go:817
apply set/view last-applied
test/e2e/kubectl/kubectl.go:852
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:44.085: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:188
... skipping 113 lines ...
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:380
should contain last line of the log
test/e2e/kubectl/kubectl.go:624
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":1,"skipped":5,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:44.239: INFO: Only supported for providers [azure] (not aws)
... skipping 106 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:25:37.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
• [SLOW TEST:13.832 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:59.140 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:53.849: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/framework/framework.go:188
... skipping 52 lines ...
• [SLOW TEST:22.317 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to change the type from NodePort to ExternalName [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:54.793: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
• [SLOW TEST:14.347 seconds]
[sig-api-machinery] Generated clientset
test/e2e/apimachinery/framework.go:23
should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
test/e2e/apimachinery/generated_clientset.go:105
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":6,"skipped":72,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:25:54.939: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 98 lines ...
Jun 15 03:25:32.589: INFO: PersistentVolumeClaim pvc-5q6pr found but phase is Pending instead of Bound.
Jun 15 03:25:34.735: INFO: PersistentVolumeClaim pvc-5q6pr found and phase=Bound (13.028370972s)
Jun 15 03:25:34.735: INFO: Waiting up to 3m0s for PersistentVolume local-bbp7w to have phase Bound
Jun 15 03:25:34.880: INFO: PersistentVolume local-bbp7w found and phase=Bound (145.208912ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8cv4
STEP: Creating a pod to test subpath
Jun 15 03:25:35.318: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8cv4" in namespace "provisioning-5203" to be "Succeeded or Failed"
Jun 15 03:25:35.463: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4": Phase="Pending", Reason="", readiness=false. Elapsed: 145.221121ms
Jun 15 03:25:37.642: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323643191s
Jun 15 03:25:39.790: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472217589s
Jun 15 03:25:41.939: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.621065829s
STEP: Saw pod success
Jun 15 03:25:41.939: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4" satisfied condition "Succeeded or Failed"
Jun 15 03:25:42.088: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-subpath-test-preprovisionedpv-8cv4 container test-container-subpath-preprovisionedpv-8cv4: <nil>
STEP: delete the pod
Jun 15 03:25:42.398: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8cv4 to disappear
Jun 15 03:25:42.543: INFO: Pod pod-subpath-test-preprovisionedpv-8cv4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8cv4
Jun 15 03:25:42.543: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8cv4" in namespace "provisioning-5203"
STEP: Creating pod pod-subpath-test-preprovisionedpv-8cv4
STEP: Creating a pod to test subpath
Jun 15 03:25:42.837: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8cv4" in namespace "provisioning-5203" to be "Succeeded or Failed"
Jun 15 03:25:42.982: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4": Phase="Pending", Reason="", readiness=false. Elapsed: 145.162087ms
Jun 15 03:25:45.129: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291518965s
Jun 15 03:25:47.274: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43726666s
Jun 15 03:25:49.421: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583594444s
Jun 15 03:25:51.566: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.729292329s
STEP: Saw pod success
Jun 15 03:25:51.566: INFO: Pod "pod-subpath-test-preprovisionedpv-8cv4" satisfied condition "Succeeded or Failed"
Jun 15 03:25:51.712: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-subpath-test-preprovisionedpv-8cv4 container test-container-subpath-preprovisionedpv-8cv4: <nil>
STEP: delete the pod
Jun 15 03:25:52.032: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8cv4 to disappear
Jun 15 03:25:52.177: INFO: Pod pod-subpath-test-preprovisionedpv-8cv4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8cv4
Jun 15 03:25:52.177: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8cv4" in namespace "provisioning-5203"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":13,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 76 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":5,"skipped":15,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 296 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":12,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 8 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 15 03:25:57.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 15, 3, 25, 57, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 15, 3, 25, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 15, 3, 25, 57, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 15, 3, 25, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f978cd6d5\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 15 03:26:01.101: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
test/e2e/framework/framework.go:652
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:188
Jun 15 03:26:02.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8409" for this suite.
... skipping 2 lines ...
test/e2e/apimachinery/webhook.go:104
• [SLOW TEST:8.152 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":7,"skipped":86,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:03.184: INFO: Only supported for providers [vsphere] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":2,"skipped":28,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:00.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:26:04.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4780" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":3,"skipped":28,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:04.926: INFO: Driver hostPathSymlink doesn't support GenericEphemeralVolume -- skipping
... skipping 94 lines ...
[It] should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
Jun 15 03:25:58.187: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 15 03:25:58.187: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-f9n8
STEP: Creating a pod to test subpath
Jun 15 03:25:58.336: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-f9n8" in namespace "provisioning-3448" to be "Succeeded or Failed"
Jun 15 03:25:58.482: INFO: Pod "pod-subpath-test-inlinevolume-f9n8": Phase="Pending", Reason="", readiness=false. Elapsed: 145.476354ms
Jun 15 03:26:00.630: INFO: Pod "pod-subpath-test-inlinevolume-f9n8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2939979s
Jun 15 03:26:02.776: INFO: Pod "pod-subpath-test-inlinevolume-f9n8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440200887s
Jun 15 03:26:04.922: INFO: Pod "pod-subpath-test-inlinevolume-f9n8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.586103682s
STEP: Saw pod success
Jun 15 03:26:04.922: INFO: Pod "pod-subpath-test-inlinevolume-f9n8" satisfied condition "Succeeded or Failed"
Jun 15 03:26:05.070: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-inlinevolume-f9n8 container test-container-volume-inlinevolume-f9n8: <nil>
STEP: delete the pod
Jun 15 03:26:05.367: INFO: Waiting for pod pod-subpath-test-inlinevolume-f9n8 to disappear
Jun 15 03:26:05.513: INFO: Pod pod-subpath-test-inlinevolume-f9n8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-f9n8
Jun 15 03:26:05.514: INFO: Deleting pod "pod-subpath-test-inlinevolume-f9n8" in namespace "provisioning-3448"
... skipping 12 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":16,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:06.142: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 92 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":20,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:06.264: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 125 lines ...
Jun 15 03:25:13.352: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-6307lvmkp
STEP: creating a claim
Jun 15 03:25:13.497: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-kzgq
STEP: Creating a pod to test atomic-volume-subpath
Jun 15 03:25:13.934: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kzgq" in namespace "provisioning-6307" to be "Succeeded or Failed"
Jun 15 03:25:14.079: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Pending", Reason="", readiness=false. Elapsed: 144.484854ms
Jun 15 03:25:16.223: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289165802s
Jun 15 03:25:18.368: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43426152s
Jun 15 03:25:20.514: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580196771s
Jun 15 03:25:22.663: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728358328s
Jun 15 03:25:24.809: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.874929629s
... skipping 9 lines ...
Jun 15 03:25:46.281: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Running", Reason="", readiness=true. Elapsed: 32.34663227s
Jun 15 03:25:48.425: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Running", Reason="", readiness=true. Elapsed: 34.491093589s
Jun 15 03:25:50.571: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Running", Reason="", readiness=true. Elapsed: 36.636973359s
Jun 15 03:25:52.717: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Running", Reason="", readiness=true. Elapsed: 38.78258644s
Jun 15 03:25:54.861: INFO: Pod "pod-subpath-test-dynamicpv-kzgq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.926715192s
STEP: Saw pod success
Jun 15 03:25:54.861: INFO: Pod "pod-subpath-test-dynamicpv-kzgq" satisfied condition "Succeeded or Failed"
Jun 15 03:25:55.008: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-dynamicpv-kzgq container test-container-subpath-dynamicpv-kzgq: <nil>
STEP: delete the pod
Jun 15 03:25:55.305: INFO: Waiting for pod pod-subpath-test-dynamicpv-kzgq to disappear
Jun 15 03:25:55.449: INFO: Pod pod-subpath-test-dynamicpv-kzgq no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kzgq
Jun 15 03:25:55.449: INFO: Deleting pod "pod-subpath-test-dynamicpv-kzgq" in namespace "provisioning-6307"
... skipping 19 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":3,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:25:55.332: INFO: >>> kubeConfig: /root/.kube/config
... skipping 26 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":2,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:07.375: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 71 lines ...
Jun 15 03:25:32.852: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-x6tqx] to have phase Bound
Jun 15 03:25:33.004: INFO: PersistentVolumeClaim pvc-x6tqx found and phase=Bound (151.870664ms)
STEP: Deleting the previously created pod
Jun 15 03:25:43.725: INFO: Deleting pod "pvc-volume-tester-bjvtf" in namespace "csi-mock-volumes-5339"
Jun 15 03:25:43.870: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bjvtf" to be fully deleted
STEP: Checking CSI driver logs
Jun 15 03:25:50.307: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"d051f0f5-ec5a-11ec-ae06-ba52ef1b224e","target_path":"/var/lib/kubelet/pods/7db67049-8906-4557-8b6a-d51055f95a72/volumes/kubernetes.io~csi/pvc-3d6660c0-8aea-4f4e-8256-b1f7372a546f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-bjvtf
Jun 15 03:25:50.307: INFO: Deleting pod "pvc-volume-tester-bjvtf" in namespace "csi-mock-volumes-5339"
STEP: Deleting claim pvc-x6tqx
Jun 15 03:25:50.740: INFO: Waiting up to 2m0s for PersistentVolume pvc-3d6660c0-8aea-4f4e-8256-b1f7372a546f to get deleted
Jun 15 03:25:50.884: INFO: PersistentVolume pvc-3d6660c0-8aea-4f4e-8256-b1f7372a546f was removed
STEP: Deleting storageclass csi-mock-volumes-5339-scmww7s
... skipping 44 lines ...
test/e2e/storage/utils/framework.go:23
CSI workload information using mock driver
test/e2e/storage/csi_mock_volume.go:467
should not be passed when podInfoOnMount=nil
test/e2e/storage/csi_mock_volume.go:517
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":4,"skipped":30,"failed":0}
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":32,"failed":0}
[BeforeEach] [sig-node] Downward API
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:01.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jun 15 03:26:02.446: INFO: Waiting up to 5m0s for pod "downward-api-e747c7e9-d47d-49ec-9293-0c4c913fee06" in namespace "downward-api-5554" to be "Succeeded or Failed"
Jun 15 03:26:02.591: INFO: Pod "downward-api-e747c7e9-d47d-49ec-9293-0c4c913fee06": Phase="Pending", Reason="", readiness=false. Elapsed: 145.005809ms
Jun 15 03:26:04.735: INFO: Pod "downward-api-e747c7e9-d47d-49ec-9293-0c4c913fee06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289227365s
Jun 15 03:26:06.880: INFO: Pod "downward-api-e747c7e9-d47d-49ec-9293-0c4c913fee06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434035978s
Jun 15 03:26:09.025: INFO: Pod "downward-api-e747c7e9-d47d-49ec-9293-0c4c913fee06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579076806s
STEP: Saw pod success
Jun 15 03:26:09.025: INFO: Pod "downward-api-e747c7e9-d47d-49ec-9293-0c4c913fee06" satisfied condition "Succeeded or Failed"
Jun 15 03:26:09.169: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod downward-api-e747c7e9-d47d-49ec-9293-0c4c913fee06 container dapi-container: <nil>
STEP: delete the pod
Jun 15 03:26:09.468: INFO: Waiting for pod downward-api-e747c7e9-d47d-49ec-9293-0c4c913fee06 to disappear
Jun 15 03:26:09.630: INFO: Pod downward-api-e747c7e9-d47d-49ec-9293-0c4c913fee06 no longer exists
[AfterEach] [sig-node] Downward API
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.652 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:09.946: INFO: Only supported for providers [openstack] (not aws)
... skipping 27 lines ...
[It] should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198
Jun 15 03:25:32.846: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 15 03:25:32.846: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-qdkf
STEP: Creating a pod to test exec-volume-test
Jun 15 03:25:32.994: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-qdkf" in namespace "volume-9177" to be "Succeeded or Failed"
Jun 15 03:25:33.137: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 143.78332ms
Jun 15 03:25:35.283: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289273586s
Jun 15 03:25:37.428: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434261029s
Jun 15 03:25:39.573: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579707493s
Jun 15 03:25:41.718: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724191893s
Jun 15 03:25:43.863: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.869541727s
... skipping 7 lines ...
Jun 15 03:26:01.027: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 28.033430329s
Jun 15 03:26:03.173: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 30.178993753s
Jun 15 03:26:05.319: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 32.325483389s
Jun 15 03:26:07.464: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 34.470658292s
Jun 15 03:26:09.633: INFO: Pod "exec-volume-test-inlinevolume-qdkf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.639470986s
STEP: Saw pod success
Jun 15 03:26:09.633: INFO: Pod "exec-volume-test-inlinevolume-qdkf" satisfied condition "Succeeded or Failed"
Jun 15 03:26:09.777: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod exec-volume-test-inlinevolume-qdkf container exec-container-inlinevolume-qdkf: <nil>
STEP: delete the pod
Jun 15 03:26:10.080: INFO: Waiting for pod exec-volume-test-inlinevolume-qdkf to disappear
Jun 15 03:26:10.224: INFO: Pod exec-volume-test-inlinevolume-qdkf no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-qdkf
Jun 15 03:26:10.224: INFO: Deleting pod "exec-volume-test-inlinevolume-qdkf" in namespace "volume-9177"
... skipping 94 lines ...
• [SLOW TEST:12.030 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should get a host IP [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:12.583: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:188
... skipping 64 lines ...
test/e2e/kubectl/portforward.go:476
that expects a client request
test/e2e/kubectl/portforward.go:477
should support a client that connects, sends DATA, and disconnects
test/e2e/kubectl/portforward.go:481
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":20,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Jun 15 03:26:01.936: INFO: PersistentVolumeClaim pvc-c4ncf found but phase is Pending instead of Bound.
Jun 15 03:26:04.081: INFO: PersistentVolumeClaim pvc-c4ncf found and phase=Bound (4.43507928s)
Jun 15 03:26:04.081: INFO: Waiting up to 3m0s for PersistentVolume local-h55g2 to have phase Bound
Jun 15 03:26:04.225: INFO: PersistentVolume local-h55g2 found and phase=Bound (144.007527ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4bd8
STEP: Creating a pod to test subpath
Jun 15 03:26:04.658: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4bd8" in namespace "provisioning-1600" to be "Succeeded or Failed"
Jun 15 03:26:04.802: INFO: Pod "pod-subpath-test-preprovisionedpv-4bd8": Phase="Pending", Reason="", readiness=false. Elapsed: 143.874955ms
Jun 15 03:26:06.947: INFO: Pod "pod-subpath-test-preprovisionedpv-4bd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288828756s
Jun 15 03:26:09.092: INFO: Pod "pod-subpath-test-preprovisionedpv-4bd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433503888s
Jun 15 03:26:11.239: INFO: Pod "pod-subpath-test-preprovisionedpv-4bd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580348579s
Jun 15 03:26:13.383: INFO: Pod "pod-subpath-test-preprovisionedpv-4bd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.724996794s
STEP: Saw pod success
Jun 15 03:26:13.384: INFO: Pod "pod-subpath-test-preprovisionedpv-4bd8" satisfied condition "Succeeded or Failed"
Jun 15 03:26:13.527: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-preprovisionedpv-4bd8 container test-container-subpath-preprovisionedpv-4bd8: <nil>
STEP: delete the pod
Jun 15 03:26:13.827: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4bd8 to disappear
Jun 15 03:26:13.971: INFO: Pod pod-subpath-test-preprovisionedpv-4bd8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4bd8
Jun 15 03:26:13.971: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4bd8" in namespace "provisioning-1600"
... skipping 187 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:17.421: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/framework/framework.go:188
... skipping 112 lines ...
Only supported for providers [azure] (not aws)
test/e2e/storage/drivers/in_tree.go:1576
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [sig-instrumentation] MetricsGrabber
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:15.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename metrics-grabber
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:26:18.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2710" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":4,"skipped":12,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 17 lines ...
Jun 15 03:25:59.870: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:00.020: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:00.764: INFO: Unable to read jessie_udp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:00.908: INFO: Unable to read jessie_tcp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:01.053: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:01.197: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:01.826: INFO: Lookups using dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d failed for: [wheezy_udp@dns-test-service.dns-7072.svc.cluster.local wheezy_tcp@dns-test-service.dns-7072.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local jessie_udp@dns-test-service.dns-7072.svc.cluster.local jessie_tcp@dns-test-service.dns-7072.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local]
Jun 15 03:26:06.971: INFO: Unable to read wheezy_udp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:07.126: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:07.271: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:07.416: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:08.144: INFO: Unable to read jessie_udp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:08.289: INFO: Unable to read jessie_tcp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:08.433: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:08.578: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:09.156: INFO: Lookups using dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d failed for: [wheezy_udp@dns-test-service.dns-7072.svc.cluster.local wheezy_tcp@dns-test-service.dns-7072.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local jessie_udp@dns-test-service.dns-7072.svc.cluster.local jessie_tcp@dns-test-service.dns-7072.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local]
Jun 15 03:26:11.971: INFO: Unable to read wheezy_udp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:12.123: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:12.275: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:12.427: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:13.157: INFO: Unable to read jessie_udp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:13.303: INFO: Unable to read jessie_tcp@dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:13.450: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:13.594: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local from pod dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d: the server could not find the requested resource (get pods dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d)
Jun 15 03:26:14.177: INFO: Lookups using dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d failed for: [wheezy_udp@dns-test-service.dns-7072.svc.cluster.local wheezy_tcp@dns-test-service.dns-7072.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local jessie_udp@dns-test-service.dns-7072.svc.cluster.local jessie_tcp@dns-test-service.dns-7072.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7072.svc.cluster.local]
Jun 15 03:26:19.162: INFO: DNS probes using dns-7072/dns-test-1999f5c2-acaa-47f6-b9c6-f7563b5da45d succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:38.390 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide DNS for services [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":6,"skipped":32,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 302 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:50
should verify that all csinodes have volume limits
test/e2e/storage/testsuites/volumelimits.go:249
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":2,"skipped":2,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 118 lines ...
test/e2e/storage/utils/framework.go:23
CSI FSGroupPolicy [LinuxOnly]
test/e2e/storage/csi_mock_volume.go:1636
should modify fsGroup if fsGroupPolicy=File
test/e2e/storage/csi_mock_volume.go:1660
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":1,"skipped":53,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:23.812: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 140 lines ...
• [SLOW TEST:23.739 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
should run the lifecycle of a Deployment [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:25.893: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 22 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 15 03:26:21.553: INFO: Waiting up to 5m0s for pod "pod-520acbf0-62b6-4b4b-8e5c-b63b5118b34d" in namespace "emptydir-9103" to be "Succeeded or Failed"
Jun 15 03:26:21.696: INFO: Pod "pod-520acbf0-62b6-4b4b-8e5c-b63b5118b34d": Phase="Pending", Reason="", readiness=false. Elapsed: 143.241742ms
Jun 15 03:26:23.842: INFO: Pod "pod-520acbf0-62b6-4b4b-8e5c-b63b5118b34d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288582486s
Jun 15 03:26:25.986: INFO: Pod "pod-520acbf0-62b6-4b4b-8e5c-b63b5118b34d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432943435s
STEP: Saw pod success
Jun 15 03:26:25.986: INFO: Pod "pod-520acbf0-62b6-4b4b-8e5c-b63b5118b34d" satisfied condition "Succeeded or Failed"
Jun 15 03:26:26.130: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-520acbf0-62b6-4b4b-8e5c-b63b5118b34d container test-container: <nil>
STEP: delete the pod
Jun 15 03:26:26.426: INFO: Waiting for pod pod-520acbf0-62b6-4b4b-8e5c-b63b5118b34d to disappear
Jun 15 03:26:26.569: INFO: Pod pod-520acbf0-62b6-4b4b-8e5c-b63b5118b34d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.461 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 69 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":5,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:27.434: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 170 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read-only inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:175
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":1,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:30.067: INFO: Only supported for providers [vsphere] (not aws)
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 15 03:26:23.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a30d254-854d-408b-861b-76ca82f641c9" in namespace "projected-7127" to be "Succeeded or Failed"
Jun 15 03:26:24.071: INFO: Pod "downwardapi-volume-8a30d254-854d-408b-861b-76ca82f641c9": Phase="Pending", Reason="", readiness=false. Elapsed: 145.581209ms
Jun 15 03:26:26.217: INFO: Pod "downwardapi-volume-8a30d254-854d-408b-861b-76ca82f641c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29145136s
Jun 15 03:26:28.367: INFO: Pod "downwardapi-volume-8a30d254-854d-408b-861b-76ca82f641c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441323179s
Jun 15 03:26:30.513: INFO: Pod "downwardapi-volume-8a30d254-854d-408b-861b-76ca82f641c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.587494389s
STEP: Saw pod success
Jun 15 03:26:30.513: INFO: Pod "downwardapi-volume-8a30d254-854d-408b-861b-76ca82f641c9" satisfied condition "Succeeded or Failed"
Jun 15 03:26:30.657: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod downwardapi-volume-8a30d254-854d-408b-861b-76ca82f641c9 container client-container: <nil>
STEP: delete the pod
Jun 15 03:26:30.962: INFO: Waiting for pod downwardapi-volume-8a30d254-854d-408b-861b-76ca82f641c9 to disappear
Jun 15 03:26:31.107: INFO: Pod downwardapi-volume-8a30d254-854d-408b-861b-76ca82f641c9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.676 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide container's memory limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:31.477: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 75 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:26:32.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9479" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:32.860: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:188
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-8a72de08-5cf9-4400-acd6-63c6a27fd5df
STEP: Creating a pod to test consume configMaps
Jun 15 03:26:20.193: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8" in namespace "projected-4725" to be "Succeeded or Failed"
Jun 15 03:26:20.338: INFO: Pod "pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8": Phase="Pending", Reason="", readiness=false. Elapsed: 144.440417ms
Jun 15 03:26:22.499: INFO: Pod "pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.306127078s
Jun 15 03:26:24.645: INFO: Pod "pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451671456s
Jun 15 03:26:26.790: INFO: Pod "pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596613152s
Jun 15 03:26:28.936: INFO: Pod "pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743075882s
Jun 15 03:26:31.081: INFO: Pod "pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.887976555s
Jun 15 03:26:33.227: INFO: Pod "pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.033221786s
STEP: Saw pod success
Jun 15 03:26:33.227: INFO: Pod "pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8" satisfied condition "Succeeded or Failed"
Jun 15 03:26:33.371: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8 container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:26:33.670: INFO: Waiting for pod pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8 to disappear
Jun 15 03:26:33.814: INFO: Pod pod-projected-configmaps-0588f13c-5acc-4ee7-8103-032a387b39e8 no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:15.208 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:11.171 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":2,"skipped":57,"failed":0}
SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:33.843: INFO: >>> kubeConfig: /root/.kube/config
... skipping 51 lines ...
test/e2e/common/node/framework.go:23
should be restarted with a local redirect http liveness probe
test/e2e/common/node/container_probe.go:285
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":4,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:35.164: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:188
... skipping 336 lines ...
• [SLOW TEST:30.217 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should serve a basic endpoint from pods [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:36.417: INFO: Driver local doesn't support ext3 -- skipping
... skipping 82 lines ...
Jun 15 03:26:07.404: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volume-7880shhhh
STEP: creating a claim
Jun 15 03:26:07.550: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-rzn6
STEP: Creating a pod to test exec-volume-test
Jun 15 03:26:07.989: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-rzn6" in namespace "volume-7880" to be "Succeeded or Failed"
Jun 15 03:26:08.132: INFO: Pod "exec-volume-test-dynamicpv-rzn6": Phase="Pending", Reason="", readiness=false. Elapsed: 143.782733ms
Jun 15 03:26:10.277: INFO: Pod "exec-volume-test-dynamicpv-rzn6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288636137s
Jun 15 03:26:12.427: INFO: Pod "exec-volume-test-dynamicpv-rzn6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438633384s
Jun 15 03:26:14.572: INFO: Pod "exec-volume-test-dynamicpv-rzn6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583614871s
Jun 15 03:26:16.726: INFO: Pod "exec-volume-test-dynamicpv-rzn6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.737483608s
Jun 15 03:26:18.872: INFO: Pod "exec-volume-test-dynamicpv-rzn6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.882977722s
Jun 15 03:26:21.016: INFO: Pod "exec-volume-test-dynamicpv-rzn6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.027716607s
Jun 15 03:26:23.161: INFO: Pod "exec-volume-test-dynamicpv-rzn6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.172183386s
Jun 15 03:26:25.306: INFO: Pod "exec-volume-test-dynamicpv-rzn6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.317430787s
STEP: Saw pod success
Jun 15 03:26:25.306: INFO: Pod "exec-volume-test-dynamicpv-rzn6" satisfied condition "Succeeded or Failed"
Jun 15 03:26:25.450: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod exec-volume-test-dynamicpv-rzn6 container exec-container-dynamicpv-rzn6: <nil>
STEP: delete the pod
Jun 15 03:26:25.747: INFO: Waiting for pod exec-volume-test-dynamicpv-rzn6 to disappear
Jun 15 03:26:25.890: INFO: Pod exec-volume-test-dynamicpv-rzn6 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-rzn6
Jun 15 03:26:25.890: INFO: Deleting pod "exec-volume-test-dynamicpv-rzn6" in namespace "volume-7880"
... skipping 39 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:26:38.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-1912" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":5,"skipped":29,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:38.778: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 123 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:26:39.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3479" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:40.296: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 23 lines ...
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:39.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
test/e2e/storage/testsuites/topology.go:194
Jun 15 03:26:40.772: INFO: found topology map[topology.kubernetes.io/zone:sa-east-1a]
Jun 15 03:26:40.772: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 15 03:26:40.772: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: aws]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
test/e2e/storage/testsuites/topology.go:194
Not enough topologies in cluster -- skipping
test/e2e/storage/testsuites/topology.go:201
------------------------------
... skipping 102 lines ...
• [SLOW TEST:23.630 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
should validate Deployment Status endpoints [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":45,"failed":0}
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:37.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
test/e2e/node/security_context.go:185
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 15 03:26:38.512: INFO: Waiting up to 5m0s for pod "security-context-0d7902cd-ed49-4c0f-9e73-22bdb1dba910" in namespace "security-context-1869" to be "Succeeded or Failed"
Jun 15 03:26:38.656: INFO: Pod "security-context-0d7902cd-ed49-4c0f-9e73-22bdb1dba910": Phase="Pending", Reason="", readiness=false. Elapsed: 144.150786ms
Jun 15 03:26:40.801: INFO: Pod "security-context-0d7902cd-ed49-4c0f-9e73-22bdb1dba910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288363838s
Jun 15 03:26:42.946: INFO: Pod "security-context-0d7902cd-ed49-4c0f-9e73-22bdb1dba910": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433384287s
Jun 15 03:26:45.090: INFO: Pod "security-context-0d7902cd-ed49-4c0f-9e73-22bdb1dba910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577763701s
STEP: Saw pod success
Jun 15 03:26:45.090: INFO: Pod "security-context-0d7902cd-ed49-4c0f-9e73-22bdb1dba910" satisfied condition "Succeeded or Failed"
Jun 15 03:26:45.234: INFO: Trying to get logs from node i-05fe3937684c9d649 pod security-context-0d7902cd-ed49-4c0f-9e73-22bdb1dba910 container test-container: <nil>
STEP: delete the pod
Jun 15 03:26:45.530: INFO: Waiting for pod security-context-0d7902cd-ed49-4c0f-9e73-22bdb1dba910 to disappear
Jun 15 03:26:45.680: INFO: Pod security-context-0d7902cd-ed49-4c0f-9e73-22bdb1dba910 no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.618 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support seccomp default which is unconfined [LinuxOnly]
test/e2e/node/security_context.go:185
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":4,"skipped":45,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:46.012: INFO: Only supported for providers [azure] (not aws)
... skipping 243 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: vsphere]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [vsphere] (not aws)
test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 7 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-volume-map-fd25c351-f7ba-4468-a291-4d9956b4fcf3
STEP: Creating a pod to test consume configMaps
Jun 15 03:26:42.533: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ea77207-4a31-47de-822e-16cb43974f49" in namespace "configmap-2955" to be "Succeeded or Failed"
Jun 15 03:26:42.676: INFO: Pod "pod-configmaps-7ea77207-4a31-47de-822e-16cb43974f49": Phase="Pending", Reason="", readiness=false. Elapsed: 143.610399ms
Jun 15 03:26:44.821: INFO: Pod "pod-configmaps-7ea77207-4a31-47de-822e-16cb43974f49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288394143s
Jun 15 03:26:46.965: INFO: Pod "pod-configmaps-7ea77207-4a31-47de-822e-16cb43974f49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432146177s
STEP: Saw pod success
Jun 15 03:26:46.965: INFO: Pod "pod-configmaps-7ea77207-4a31-47de-822e-16cb43974f49" satisfied condition "Succeeded or Failed"
Jun 15 03:26:47.109: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-configmaps-7ea77207-4a31-47de-822e-16cb43974f49 container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:26:47.402: INFO: Waiting for pod pod-configmaps-7ea77207-4a31-47de-822e-16cb43974f49 to disappear
Jun 15 03:26:47.561: INFO: Pod pod-configmaps-7ea77207-4a31-47de-822e-16cb43974f49 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:188
... skipping 14 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount projected service account token [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test service account token:
Jun 15 03:26:34.053: INFO: Waiting up to 5m0s for pod "test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931" in namespace "svcaccounts-3308" to be "Succeeded or Failed"
Jun 15 03:26:34.197: INFO: Pod "test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931": Phase="Pending", Reason="", readiness=false. Elapsed: 144.65953ms
Jun 15 03:26:36.342: INFO: Pod "test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28910899s
Jun 15 03:26:38.490: INFO: Pod "test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436948048s
Jun 15 03:26:40.636: INFO: Pod "test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58290455s
Jun 15 03:26:42.781: INFO: Pod "test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728371922s
Jun 15 03:26:44.926: INFO: Pod "test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931": Phase="Pending", Reason="", readiness=false. Elapsed: 10.873343551s
Jun 15 03:26:47.071: INFO: Pod "test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.018380097s
STEP: Saw pod success
Jun 15 03:26:47.071: INFO: Pod "test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931" satisfied condition "Succeeded or Failed"
Jun 15 03:26:47.216: INFO: Trying to get logs from node i-05fe3937684c9d649 pod test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931 container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:26:47.520: INFO: Waiting for pod test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931 to disappear
Jun 15 03:26:47.665: INFO: Pod test-pod-b9cc7267-db50-431b-a79c-af44f7d0c931 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:15.068 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
should mount projected service account token [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 58 lines ...
test/e2e/kubectl/framework.go:23
Kubectl label
test/e2e/kubectl/kubectl.go:1332
should update the label on a resource [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:10.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 210 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support multiple inline ephemeral volumes
test/e2e/storage/testsuites/ephemeral.go:254
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":3,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:55.289: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/framework/framework.go:188
... skipping 189 lines ...
Jun 15 03:26:20.942: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-14902xm2m
STEP: creating a claim
Jun 15 03:26:21.087: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-srpj
STEP: Creating a pod to test subpath
Jun 15 03:26:21.526: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-srpj" in namespace "provisioning-1490" to be "Succeeded or Failed"
Jun 15 03:26:21.671: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 144.506331ms
Jun 15 03:26:23.816: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289997897s
Jun 15 03:26:25.963: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437210866s
Jun 15 03:26:28.108: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581537317s
Jun 15 03:26:30.253: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726988341s
Jun 15 03:26:32.408: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.881539462s
Jun 15 03:26:34.553: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.027092921s
Jun 15 03:26:36.710: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 15.183836058s
Jun 15 03:26:38.858: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 17.331500046s
Jun 15 03:26:41.003: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 19.476753427s
Jun 15 03:26:43.148: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Pending", Reason="", readiness=false. Elapsed: 21.622416021s
Jun 15 03:26:45.295: INFO: Pod "pod-subpath-test-dynamicpv-srpj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.76923715s
STEP: Saw pod success
Jun 15 03:26:45.295: INFO: Pod "pod-subpath-test-dynamicpv-srpj" satisfied condition "Succeeded or Failed"
Jun 15 03:26:45.441: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-subpath-test-dynamicpv-srpj container test-container-volume-dynamicpv-srpj: <nil>
STEP: delete the pod
Jun 15 03:26:45.735: INFO: Waiting for pod pod-subpath-test-dynamicpv-srpj to disappear
Jun 15 03:26:45.879: INFO: Pod pod-subpath-test-dynamicpv-srpj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-srpj
Jun 15 03:26:45.879: INFO: Deleting pod "pod-subpath-test-dynamicpv-srpj" in namespace "provisioning-1490"
... skipping 19 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":33,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:51.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 15 03:26:52.273: INFO: Waiting up to 5m0s for pod "pod-0e6594c8-d8b3-4d77-899d-a0c4ce339048" in namespace "emptydir-1244" to be "Succeeded or Failed"
Jun 15 03:26:52.417: INFO: Pod "pod-0e6594c8-d8b3-4d77-899d-a0c4ce339048": Phase="Pending", Reason="", readiness=false. Elapsed: 144.13934ms
Jun 15 03:26:54.563: INFO: Pod "pod-0e6594c8-d8b3-4d77-899d-a0c4ce339048": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290381804s
Jun 15 03:26:56.709: INFO: Pod "pod-0e6594c8-d8b3-4d77-899d-a0c4ce339048": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.436252103s
STEP: Saw pod success
Jun 15 03:26:56.709: INFO: Pod "pod-0e6594c8-d8b3-4d77-899d-a0c4ce339048" satisfied condition "Succeeded or Failed"
Jun 15 03:26:56.853: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-0e6594c8-d8b3-4d77-899d-a0c4ce339048 container test-container: <nil>
STEP: delete the pod
Jun 15 03:26:57.158: INFO: Waiting for pod pod-0e6594c8-d8b3-4d77-899d-a0c4ce339048 to disappear
Jun 15 03:26:57.301: INFO: Pod pod-0e6594c8-d8b3-4d77-899d-a0c4ce339048 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.508 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:57.636: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 66 lines ...
• [SLOW TEST:29.785 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should create endpoints for unready pods
test/e2e/network/service.go:1655
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":7,"skipped":57,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 14 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:26:59.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2582" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:26:59.858: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 52 lines ...
Jun 15 03:26:46.581: INFO: PersistentVolumeClaim pvc-nr228 found but phase is Pending instead of Bound.
Jun 15 03:26:48.724: INFO: PersistentVolumeClaim pvc-nr228 found and phase=Bound (6.575055182s)
Jun 15 03:26:48.725: INFO: Waiting up to 3m0s for PersistentVolume local-r7556 to have phase Bound
Jun 15 03:26:48.868: INFO: PersistentVolume local-r7556 found and phase=Bound (143.42062ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-49wp
STEP: Creating a pod to test subpath
Jun 15 03:26:49.302: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-49wp" in namespace "provisioning-2238" to be "Succeeded or Failed"
Jun 15 03:26:49.446: INFO: Pod "pod-subpath-test-preprovisionedpv-49wp": Phase="Pending", Reason="", readiness=false. Elapsed: 143.559751ms
Jun 15 03:26:51.590: INFO: Pod "pod-subpath-test-preprovisionedpv-49wp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288180664s
Jun 15 03:26:53.735: INFO: Pod "pod-subpath-test-preprovisionedpv-49wp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433179115s
Jun 15 03:26:55.880: INFO: Pod "pod-subpath-test-preprovisionedpv-49wp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578064245s
STEP: Saw pod success
Jun 15 03:26:55.880: INFO: Pod "pod-subpath-test-preprovisionedpv-49wp" satisfied condition "Succeeded or Failed"
Jun 15 03:26:56.024: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-preprovisionedpv-49wp container test-container-subpath-preprovisionedpv-49wp: <nil>
STEP: delete the pod
Jun 15 03:26:56.328: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-49wp to disappear
Jun 15 03:26:56.472: INFO: Pod pod-subpath-test-preprovisionedpv-49wp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-49wp
Jun 15 03:26:56.472: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-49wp" in namespace "provisioning-2238"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":37,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:01.376: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 100 lines ...
test/e2e/storage/testsuites/volumes.go:198
Driver local doesn't support ext3 -- skipping
test/e2e/storage/framework/testsuite.go:121
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":3,"skipped":66,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:48.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:77
STEP: Creating configMap with name projected-configmap-test-volume-72ef8ed9-cc7a-46a0-9c49-384f57c9df44
STEP: Creating a pod to test consume configMaps
Jun 15 03:26:49.865: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b" in namespace "projected-9027" to be "Succeeded or Failed"
Jun 15 03:26:50.009: INFO: Pod "pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b": Phase="Pending", Reason="", readiness=false. Elapsed: 143.237383ms
Jun 15 03:26:52.170: INFO: Pod "pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304844712s
Jun 15 03:26:54.315: INFO: Pod "pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449247327s
Jun 15 03:26:56.460: INFO: Pod "pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.59421865s
Jun 15 03:26:58.605: INFO: Pod "pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73991389s
Jun 15 03:27:00.749: INFO: Pod "pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.883766307s
STEP: Saw pod success
Jun 15 03:27:00.749: INFO: Pod "pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b" satisfied condition "Succeeded or Failed"
Jun 15 03:27:00.893: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:27:01.199: INFO: Waiting for pod pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b to disappear
Jun 15 03:27:01.342: INFO: Pod pod-projected-configmaps-69dc42ea-9bc8-4390-bc67-69986024c83b no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:13.064 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:77
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":66,"failed":0}
SSS
------------------------------
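The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from a generic poll-until-condition loop in the e2e framework. A minimal sketch of that pattern in Python (a hypothetical helper for illustration, not the framework's actual Go code):

```python
import time

def wait_for(predicate, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll predicate() every `interval` seconds until it returns True,
    raising TimeoutError after `timeout` seconds; returns elapsed time."""
    start = clock()
    while True:
        elapsed = clock() - start
        if predicate():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {timeout}s")
        sleep(interval)

# Fake "pod phase" source: reports Pending three times, then Succeeded,
# mimicking the Phase="Pending" ... Phase="Succeeded" log lines above.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
current = {"phase": "Pending"}

def pod_succeeded():
    current["phase"] = next(phases, "Succeeded")
    return current["phase"] == "Succeeded"

elapsed = wait_for(pod_succeeded, timeout=10.0, interval=0.01)
```

Each log line in the run above corresponds to one iteration of such a loop, with the elapsed time printed on every poll.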
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:01.660: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 82 lines ...
Jun 15 03:27:00.146: INFO: Creating a PV followed by a PVC
Jun 15 03:27:00.434: INFO: Waiting for PV local-pvdtwmb to bind to PVC pvc-csft4
Jun 15 03:27:00.434: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-csft4] to have phase Bound
Jun 15 03:27:00.582: INFO: PersistentVolumeClaim pvc-csft4 found and phase=Bound (148.270103ms)
Jun 15 03:27:00.583: INFO: Waiting up to 3m0s for PersistentVolume local-pvdtwmb to have phase Bound
Jun 15 03:27:00.726: INFO: PersistentVolume local-pvdtwmb found and phase=Bound (143.59847ms)
[It] should fail scheduling due to different NodeSelector
test/e2e/storage/persistent_volumes-local.go:381
STEP: local-volume-type: dir
Jun 15 03:27:01.163: INFO: Waiting up to 5m0s for pod "pod-081e5902-d11d-46df-a393-2a903ea13dc6" in namespace "persistent-local-volumes-test-7953" to be "Unschedulable"
Jun 15 03:27:01.308: INFO: Pod "pod-081e5902-d11d-46df-a393-2a903ea13dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 144.342266ms
Jun 15 03:27:01.308: INFO: Pod "pod-081e5902-d11d-46df-a393-2a903ea13dc6" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 14 lines ...
• [SLOW TEST:7.529 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
Pod with node different from PV's NodeAffinity
test/e2e/storage/persistent_volumes-local.go:349
should fail scheduling due to different NodeSelector
test/e2e/storage/persistent_volumes-local.go:381
------------------------------
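The "should fail scheduling due to different NodeSelector" case above works by pinning a local PV to one node via `nodeAffinity` while the consuming pod's `nodeSelector` demands a different node, so the scheduler correctly reports the pod as Unschedulable. A sketch of such a conflicting pair (names, paths, and hostnames are illustrative, not the test's actual objects):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-a"]     # PV is usable only on node-a
---
apiVersion: v1
kind: Pod
metadata:
  name: conflicting-pod             # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: node-b  # pod demands node-b -> Unschedulable
  containers:
    - name: app
      image: k8s.gcr.io/pause:3.7
      volumeMounts:
        - name: vol
          mountPath: /data
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: local-pvc-example
```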
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:27:01.679: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
STEP: Destroying namespace "services-3265" for this suite.
[AfterEach] [sig-network] Services
test/e2e/network/service.go:760
•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":5,"skipped":72,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 67 lines ...
test/e2e/common/network/framework.go:23
Granular Checks: Pods
test/e2e/common/network/networking.go:32
should function for intra-pod communication: udp [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":6,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:31.551: INFO: >>> kubeConfig: /root/.kube/config
... skipping 7 lines ...
Jun 15 03:26:32.567: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi}
STEP: creating a StorageClass volume-expand-85952cg6v
STEP: creating a claim
Jun 15 03:26:32.711: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jun 15 03:26:33.002: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
Jun 15 03:26:33.313: INFO: Error updating pvc aws49ck4: PersistentVolumeClaim "aws49ck4" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-85952cg6v",
... // 3 identical fields
}
... skipping 195 lines (the same "Forbidden: spec is immutable" error and diff repeated on each ~2s retry, 03:26:35 through 03:27:03) ...
Jun 15 03:27:03.895: INFO: Error updating pvc aws49ck4: PersistentVolumeClaim "aws49ck4" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 24 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:50
should not allow expansion of pvcs without AllowVolumeExpansion property
test/e2e/storage/testsuites/volume_expand.go:159
------------------------------
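The repeated `Forbidden: spec is immutable after creation except resources.requests for bound claims` errors above are the expected outcome: the suite tries to grow a PVC from 1Gi to 2Gi under a StorageClass that does not set `allowVolumeExpansion`, and the API server rejects every attempt. For comparison, a minimal StorageClass that would permit such a resize might look like this (name is illustrative; the test's actual class `volume-expand-85952cg6v` deliberately omits the field):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ebs          # illustrative name
provisioner: kubernetes.io/aws-ebs
allowVolumeExpansion: true      # without this, PVC resize requests are rejected
```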
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":7,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:04.635: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/framework/framework.go:188
... skipping 69 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: hostPath]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver hostPath doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 91 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":24,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:05.574: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
Only supported for providers [azure] (not aws)
test/e2e/storage/drivers/in_tree.go:2077
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":4,"skipped":60,"failed":0}
[BeforeEach] [sig-storage] PV Protection
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:27:03.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv-protection
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
Jun 15 03:27:05.761: INFO: AfterEach: Cleaning up test resources.
Jun 15 03:27:05.761: INFO: Deleting PersistentVolumeClaim "pvc-2xhqz"
Jun 15 03:27:05.907: INFO: Deleting PersistentVolume "hostpath-nfln4"
•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":5,"skipped":60,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:06.069: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 146 lines ...
• [SLOW TEST:21.409 seconds]
[sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers]
test/e2e/common/node/framework.go:23
will start an ephemeral container in an existing pod
test/e2e/common/node/ephemeral_containers.go:44
------------------------------
{"msg":"PASSED [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] will start an ephemeral container in an existing pod","total":-1,"completed":5,"skipped":80,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 104 lines ...
test/e2e/storage/utils/framework.go:23
CSI online volume expansion
test/e2e/storage/csi_mock_volume.go:750
should expand volume without restarting pod if attach=off, nodeExpansion=on
test/e2e/storage/csi_mock_volume.go:765
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":2,"skipped":29,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:09.861: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 62 lines ...
Jun 15 03:26:47.455: INFO: PersistentVolumeClaim pvc-jjhvb found but phase is Pending instead of Bound.
Jun 15 03:26:49.600: INFO: PersistentVolumeClaim pvc-jjhvb found and phase=Bound (2.290488066s)
Jun 15 03:26:49.600: INFO: Waiting up to 3m0s for PersistentVolume local-hdpwz to have phase Bound
Jun 15 03:26:49.746: INFO: PersistentVolume local-hdpwz found and phase=Bound (145.47229ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-b7wr
STEP: Creating a pod to test exec-volume-test
Jun 15 03:26:50.184: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-b7wr" in namespace "volume-3092" to be "Succeeded or Failed"
Jun 15 03:26:50.332: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr": Phase="Pending", Reason="", readiness=false. Elapsed: 147.872108ms
Jun 15 03:26:52.478: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294584812s
Jun 15 03:26:54.626: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4419361s
Jun 15 03:26:56.772: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588336658s
Jun 15 03:26:58.919: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735366646s
Jun 15 03:27:01.065: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.881403643s
Jun 15 03:27:03.213: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr": Phase="Pending", Reason="", readiness=false. Elapsed: 13.028825423s
Jun 15 03:27:05.359: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr": Phase="Pending", Reason="", readiness=false. Elapsed: 15.175421195s
Jun 15 03:27:07.505: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.321324726s
STEP: Saw pod success
Jun 15 03:27:07.505: INFO: Pod "exec-volume-test-preprovisionedpv-b7wr" satisfied condition "Succeeded or Failed"
Jun 15 03:27:07.650: INFO: Trying to get logs from node i-05fe3937684c9d649 pod exec-volume-test-preprovisionedpv-b7wr container exec-container-preprovisionedpv-b7wr: <nil>
STEP: delete the pod
Jun 15 03:27:07.951: INFO: Waiting for pod exec-volume-test-preprovisionedpv-b7wr to disappear
Jun 15 03:27:08.096: INFO: Pod exec-volume-test-preprovisionedpv-b7wr no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-b7wr
Jun 15 03:27:08.096: INFO: Deleting pod "exec-volume-test-preprovisionedpv-b7wr" in namespace "volume-3092"
... skipping 19 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":34,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:09.992: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/auth/service_accounts.go:325
STEP: Creating a pod to test service account token:
Jun 15 03:26:39.964: INFO: Waiting up to 5m0s for pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc" in namespace "svcaccounts-4270" to be "Succeeded or Failed"
Jun 15 03:26:40.109: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 145.603527ms
Jun 15 03:26:42.255: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291105804s
Jun 15 03:26:44.401: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437156627s
STEP: Saw pod success
Jun 15 03:26:44.401: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc" satisfied condition "Succeeded or Failed"
Jun 15 03:26:44.546: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:26:44.841: INFO: Waiting for pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc to disappear
Jun 15 03:26:44.985: INFO: Pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc no longer exists
STEP: Creating a pod to test service account token:
Jun 15 03:26:45.131: INFO: Waiting up to 5m0s for pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc" in namespace "svcaccounts-4270" to be "Succeeded or Failed"
Jun 15 03:26:45.276: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 145.246513ms
Jun 15 03:26:47.426: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294890021s
Jun 15 03:26:49.572: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440884736s
Jun 15 03:26:51.719: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588051029s
Jun 15 03:26:53.866: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.734869739s
Jun 15 03:26:56.013: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.881653808s
Jun 15 03:26:58.158: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.027558904s
STEP: Saw pod success
Jun 15 03:26:58.159: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc" satisfied condition "Succeeded or Failed"
Jun 15 03:26:58.303: INFO: Trying to get logs from node i-05fe3937684c9d649 pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:26:58.601: INFO: Waiting for pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc to disappear
Jun 15 03:26:58.746: INFO: Pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc no longer exists
STEP: Creating a pod to test service account token:
Jun 15 03:26:58.894: INFO: Waiting up to 5m0s for pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc" in namespace "svcaccounts-4270" to be "Succeeded or Failed"
Jun 15 03:26:59.039: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 144.820725ms
Jun 15 03:27:01.194: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300019666s
Jun 15 03:27:03.340: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.446224056s
STEP: Saw pod success
Jun 15 03:27:03.340: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc" satisfied condition "Succeeded or Failed"
Jun 15 03:27:03.486: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:27:03.780: INFO: Waiting for pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc to disappear
Jun 15 03:27:03.927: INFO: Pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc no longer exists
STEP: Creating a pod to test service account token:
Jun 15 03:27:04.074: INFO: Waiting up to 5m0s for pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc" in namespace "svcaccounts-4270" to be "Succeeded or Failed"
Jun 15 03:27:04.218: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 144.422404ms
Jun 15 03:27:06.368: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294088079s
Jun 15 03:27:08.519: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445723399s
Jun 15 03:27:10.667: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.593531093s
STEP: Saw pod success
Jun 15 03:27:10.667: INFO: Pod "test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc" satisfied condition "Succeeded or Failed"
Jun 15 03:27:10.812: INFO: Trying to get logs from node i-05fe3937684c9d649 pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:27:11.110: INFO: Waiting for pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc to disappear
Jun 15 03:27:11.255: INFO: Pod test-pod-1674f48e-2f5e-4d61-88b0-609fac55cfbc no longer exists
[AfterEach] [sig-auth] ServiceAccounts
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:32.766 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/auth/service_accounts.go:325
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":32,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:11.590: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
Driver hostPath doesn't support PreprovisionedPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":8,"skipped":34,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:27:01.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 47 lines ...
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:380
should support exec through kubectl proxy
test/e2e/kubectl/kubectl.go:474
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":9,"skipped":34,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:18.869: INFO: Only supported for providers [azure] (not aws)
... skipping 52 lines ...
Jun 15 03:27:13.232: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 15 03:27:13.232: INFO: Running '/logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kubectl --server=https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3933 describe pod agnhost-primary-gl44r'
Jun 15 03:27:14.031: INFO: stderr: ""
Jun 15 03:27:14.031: INFO: stdout: "Name: agnhost-primary-gl44r\nNamespace: kubectl-3933\nPriority: 0\nNode: i-05fe3937684c9d649/172.20.46.138\nStart Time: Wed, 15 Jun 2022 03:27:06 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 172.20.49.255\nIPs:\n IP: 172.20.49.255\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://ad1f361dceb62808fdcbf54dbe1b22ee27e56266d61a5bb415a1d6ce4fa20e63\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.36\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:f5241226198f5a54d22540acf2b3933ea0f49458f90c51fc75833d0c428687b8\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 15 Jun 2022 03:27:07 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-522mk (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-522mk:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-3933/agnhost-primary-gl44r to i-05fe3937684c9d649\n Normal Pulled 7s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.36\" already present on machine\n Normal Created 7s kubelet Created container agnhost-primary\n Normal Started 7s kubelet Started container agnhost-primary\n"
Jun 15 03:27:14.031: INFO: Running '/logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kubectl --server=https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3933 describe rc agnhost-primary'
Jun 15 03:27:14.986: INFO: stderr: ""
Jun 15 03:27:14.986: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3933\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.36\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: agnhost-primary-gl44r\n"
Jun 15 03:27:14.987: INFO: Running '/logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kubectl --server=https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3933 describe service agnhost-primary'
Jun 15 03:27:15.925: INFO: stderr: ""
Jun 15 03:27:15.925: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3933\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 172.20.22.55\nIPs: 172.20.22.55\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 172.20.49.255:6379\nSession Affinity: None\nEvents: <none>\n"
Jun 15 03:27:16.070: INFO: Running '/logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kubectl --server=https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3933 describe node i-020fc75861952cd2c'
Jun 15 03:27:17.763: INFO: stderr: ""
Jun 15 03:27:17.763: INFO: stdout: "Name: i-020fc75861952cd2c\nRoles: control-plane\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=c5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=sa-east-1\n failure-domain.beta.kubernetes.io/zone=sa-east-1a\n kops.k8s.io/instancegroup=master-sa-east-1a\n kops.k8s.io/kops-controller-pki=\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=i-020fc75861952cd2c\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node.kubernetes.io/exclude-from-external-load-balancers=\n node.kubernetes.io/instance-type=c5.large\n topology.ebs.csi.aws.com/zone=sa-east-1a\n topology.kubernetes.io/region=sa-east-1\n topology.kubernetes.io/zone=sa-east-1a\nAnnotations: alpha.kubernetes.io/provided-node-ip: 172.20.62.59\n csi.volume.kubernetes.io/nodeid: {\"ebs.csi.aws.com\":\"i-020fc75861952cd2c\"}\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 15 Jun 2022 03:19:40 +0000\nTaints: node-role.kubernetes.io/control-plane:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: i-020fc75861952cd2c\n AcquireTime: <unset>\n RenewTime: Wed, 15 Jun 2022 03:27:14 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 15 Jun 2022 03:26:28 +0000 Wed, 15 Jun 2022 03:19:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 15 Jun 2022 03:26:28 +0000 Wed, 15 Jun 2022 03:19:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 15 Jun 2022 03:26:28 +0000 Wed, 15 Jun 2022 03:19:38 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 15 Jun 2022 03:26:28 +0000 Wed, 15 Jun 2022 03:21:01 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n InternalIP: 172.20.62.59\n ExternalIP: 177.71.173.211\n InternalDNS: i-020fc75861952cd2c.sa-east-1.compute.internal\n Hostname: i-020fc75861952cd2c.sa-east-1.compute.internal\n ExternalDNS: ec2-177-71-173-211.sa-east-1.compute.amazonaws.com\nCapacity:\n cpu: 2\n ephemeral-storage: 48600704Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 3774388Ki\n pods: 29\nAllocatable:\n cpu: 2\n ephemeral-storage: 44790408733\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 3671988Ki\n pods: 29\nSystem Info:\n Machine ID: ec2df104e072029240ce1b60e49a1eb5\n System UUID: ec2df104-e072-0292-40ce-1b60e49a1eb5\n Boot ID: 47709416-60bc-49b1-b3b8-1db9b1b4078a\n Kernel Version: 5.13.0-1029-aws\n OS Image: Ubuntu 20.04.4 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.6\n Kubelet Version: v1.24.1\n Kube-Proxy Version: v1.24.1\nPodCIDR: 172.20.128.0/24\nPodCIDRs: 172.20.128.0/24\nProviderID: aws:///sa-east-1a/i-020fc75861952cd2c\nNon-terminated Pods: (11 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system aws-cloud-controller-manager-ssh6g 200m (10%) 0 (0%) 0 (0%) 0 (0%) 7m\n kube-system aws-node-f9lsl 10m (0%) 0 (0%) 0 (0%) 0 (0%) 7m\n kube-system dns-controller-5cfbf8d7f8-mx58q 50m (2%) 0 (0%) 50Mi (1%) 0 (0%) 6m59s\n kube-system ebs-csi-node-znvwx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m\n kube-system etcd-manager-events-i-020fc75861952cd2c 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 6m48s\n kube-system etcd-manager-main-i-020fc75861952cd2c 200m (10%) 0 (0%) 100Mi (2%) 0 (0%) 6m52s\n kube-system kops-controller-b5wks 50m (2%) 0 (0%) 50Mi (1%) 0 (0%) 7m\n kube-system kube-apiserver-i-020fc75861952cd2c 150m (7%) 0 (0%) 0 (0%) 0 (0%) 6m21s\n kube-system kube-controller-manager-i-020fc75861952cd2c 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m50s\n kube-system kube-proxy-i-020fc75861952cd2c 100m (5%) 0 (0%) 0 (0%) 0 (0%) 
6m53s\n kube-system kube-scheduler-i-020fc75861952cd2c 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m42s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1060m (53%) 0 (0%)\n memory 300Mi (8%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 7m27s kube-proxy \n Normal NodeHasSufficientMemory 8m30s (x8 over 8m30s) kubelet Node i-020fc75861952cd2c status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 8m30s (x7 over 8m30s) kubelet Node i-020fc75861952cd2c status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 8m30s (x7 over 8m30s) kubelet Node i-020fc75861952cd2c status is now: NodeHasSufficientPID\n Normal RegisteredNode 7m node-controller Node i-020fc75861952cd2c event: Registered Node i-020fc75861952cd2c in Controller\n Normal Synced 6m39s cloud-node-controller Node synced successfully\n"
... skipping 11 lines ...
test/e2e/kubectl/framework.go:23
Kubectl describe
test/e2e/kubectl/kubectl.go:1110
should check if kubectl describe prints relevant information for rc and pods [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:19.042: INFO: Only supported for providers [openstack] (not aws)
... skipping 65 lines ...
• [SLOW TEST:7.805 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
optional updates should be reflected in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":38,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
... skipping 142 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":102,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:47.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 66 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":102,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 117 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read/write inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":4,"skipped":42,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 6 lines ...
[It] should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
Jun 15 03:27:19.914: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 15 03:27:20.063: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7pd7
STEP: Creating a pod to test subpath
Jun 15 03:27:20.218: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7pd7" in namespace "provisioning-2845" to be "Succeeded or Failed"
Jun 15 03:27:20.362: INFO: Pod "pod-subpath-test-inlinevolume-7pd7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.930125ms
Jun 15 03:27:22.510: INFO: Pod "pod-subpath-test-inlinevolume-7pd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291716583s
Jun 15 03:27:24.655: INFO: Pod "pod-subpath-test-inlinevolume-7pd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437282914s
Jun 15 03:27:26.802: INFO: Pod "pod-subpath-test-inlinevolume-7pd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.58387112s
STEP: Saw pod success
Jun 15 03:27:26.802: INFO: Pod "pod-subpath-test-inlinevolume-7pd7" satisfied condition "Succeeded or Failed"
Jun 15 03:27:26.946: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-inlinevolume-7pd7 container test-container-volume-inlinevolume-7pd7: <nil>
STEP: delete the pod
Jun 15 03:27:27.241: INFO: Waiting for pod pod-subpath-test-inlinevolume-7pd7 to disappear
Jun 15 03:27:27.384: INFO: Pod pod-subpath-test-inlinevolume-7pd7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7pd7
Jun 15 03:27:27.385: INFO: Deleting pod "pod-subpath-test-inlinevolume-7pd7" in namespace "provisioning-2845"
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-volume-1c1dccd2-81e8-4f2b-87d3-5cb9ca64d82a
STEP: Creating a pod to test consume configMaps
Jun 15 03:27:23.102: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd7cdd54-6d62-4a9e-8c19-befd0281ea37" in namespace "configmap-9749" to be "Succeeded or Failed"
Jun 15 03:27:23.245: INFO: Pod "pod-configmaps-dd7cdd54-6d62-4a9e-8c19-befd0281ea37": Phase="Pending", Reason="", readiness=false. Elapsed: 143.153339ms
Jun 15 03:27:25.390: INFO: Pod "pod-configmaps-dd7cdd54-6d62-4a9e-8c19-befd0281ea37": Phase="Running", Reason="", readiness=false. Elapsed: 2.287867823s
Jun 15 03:27:27.539: INFO: Pod "pod-configmaps-dd7cdd54-6d62-4a9e-8c19-befd0281ea37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437563825s
STEP: Saw pod success
Jun 15 03:27:27.539: INFO: Pod "pod-configmaps-dd7cdd54-6d62-4a9e-8c19-befd0281ea37" satisfied condition "Succeeded or Failed"
Jun 15 03:27:27.682: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-configmaps-dd7cdd54-6d62-4a9e-8c19-befd0281ea37 container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:27:27.978: INFO: Waiting for pod pod-configmaps-dd7cdd54-6d62-4a9e-8c19-befd0281ea37 to disappear
Jun 15 03:27:28.122: INFO: Pod pod-configmaps-dd7cdd54-6d62-4a9e-8c19-befd0281ea37 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.605 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":105,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:28.429: INFO: Only supported for providers [vsphere] (not aws)
... skipping 12 lines ...
test/e2e/storage/testsuites/subpath.go:196
Only supported for providers [vsphere] (not aws)
test/e2e/storage/drivers/in_tree.go:1438
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
[1mSTEP[0m: Creating a kubernetes client
Jun 15 03:27:20.728: INFO: >>> kubeConfig: /root/.kube/config
... skipping 26 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":29,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:28.542: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
Driver csi-hostpath doesn't support ext3 -- skipping
test/e2e/storage/framework/testsuite.go:121
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":40,"failed":0}
[BeforeEach] [sig-node] ConfigMap
test/e2e/framework/framework.go:187
[1mSTEP[0m: Creating a kubernetes client
Jun 15 03:27:27.976: INFO: >>> kubeConfig: /root/.kube/config
[1mSTEP[0m: Building a namespace api object, basename configmap
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:27:29.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9634" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":11,"skipped":40,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:29.760: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 65 lines ...
Jun 15 03:27:16.986: INFO: PersistentVolumeClaim pvc-jlhf8 found but phase is Pending instead of Bound.
Jun 15 03:27:19.137: INFO: PersistentVolumeClaim pvc-jlhf8 found and phase=Bound (4.439405636s)
Jun 15 03:27:19.137: INFO: Waiting up to 3m0s for PersistentVolume local-kn6g6 to have phase Bound
Jun 15 03:27:19.280: INFO: PersistentVolume local-kn6g6 found and phase=Bound (143.704396ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-x6g2
STEP: Creating a pod to test subpath
Jun 15 03:27:19.728: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-x6g2" in namespace "provisioning-6210" to be "Succeeded or Failed"
Jun 15 03:27:19.871: INFO: Pod "pod-subpath-test-preprovisionedpv-x6g2": Phase="Pending", Reason="", readiness=false. Elapsed: 143.437235ms
Jun 15 03:27:22.016: INFO: Pod "pod-subpath-test-preprovisionedpv-x6g2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288581302s
Jun 15 03:27:24.183: INFO: Pod "pod-subpath-test-preprovisionedpv-x6g2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455146504s
Jun 15 03:27:26.327: INFO: Pod "pod-subpath-test-preprovisionedpv-x6g2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599681007s
Jun 15 03:27:28.473: INFO: Pod "pod-subpath-test-preprovisionedpv-x6g2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.74489857s
STEP: Saw pod success
Jun 15 03:27:28.473: INFO: Pod "pod-subpath-test-preprovisionedpv-x6g2" satisfied condition "Succeeded or Failed"
Jun 15 03:27:28.619: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-preprovisionedpv-x6g2 container test-container-subpath-preprovisionedpv-x6g2: <nil>
STEP: delete the pod
Jun 15 03:27:28.915: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-x6g2 to disappear
Jun 15 03:27:29.059: INFO: Pod pod-subpath-test-preprovisionedpv-x6g2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-x6g2
Jun 15 03:27:29.059: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-x6g2" in namespace "provisioning-6210"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":33,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:31.053: INFO: Only supported for providers [azure] (not aws)
... skipping 14 lines ...
Only supported for providers [azure] (not aws)
test/e2e/storage/drivers/in_tree.go:1576
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/framework/framework.go:187
[1mSTEP[0m: Creating a kubernetes client
Jun 15 03:26:07.197: INFO: >>> kubeConfig: /root/.kube/config
... skipping 66 lines ...
Jun 15 03:26:34.932: INFO: PersistentVolumeClaim csi-hostpathld6pk found but phase is Pending instead of Bound.
Jun 15 03:26:37.081: INFO: PersistentVolumeClaim csi-hostpathld6pk found but phase is Pending instead of Bound.
Jun 15 03:26:39.227: INFO: PersistentVolumeClaim csi-hostpathld6pk found but phase is Pending instead of Bound.
Jun 15 03:26:41.376: INFO: PersistentVolumeClaim csi-hostpathld6pk found and phase=Bound (25.897668524s)
STEP: Expanding non-expandable pvc
Jun 15 03:26:41.670: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
Jun 15 03:26:41.958: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:26:44.248: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:26:46.250: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:26:48.249: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:26:50.248: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:26:52.272: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:26:54.249: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:26:56.247: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:26:58.247: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:00.247: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:02.247: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:04.248: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:06.248: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:08.250: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:10.248: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:12.247: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:12.539: INFO: Error updating pvc csi-hostpathld6pk: persistentvolumeclaims "csi-hostpathld6pk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jun 15 03:27:12.539: INFO: Deleting PersistentVolumeClaim "csi-hostpathld6pk"
Jun 15 03:27:12.686: INFO: Waiting up to 5m0s for PersistentVolume pvc-85678789-23c7-480c-8374-51440a4e0a90 to get deleted
Jun 15 03:27:12.830: INFO: PersistentVolume pvc-85678789-23c7-480c-8374-51440a4e0a90 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-1669
... skipping 52 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:50
should not allow expansion of pvcs without AllowVolumeExpansion property
test/e2e/storage/testsuites/volume_expand.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":3,"skipped":13,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 113 lines ...
Jun 15 03:27:16.063: INFO: PersistentVolumeClaim pvc-b2tdg found but phase is Pending instead of Bound.
Jun 15 03:27:18.213: INFO: PersistentVolumeClaim pvc-b2tdg found and phase=Bound (2.293412637s)
Jun 15 03:27:18.213: INFO: Waiting up to 3m0s for PersistentVolume local-5b5xc to have phase Bound
Jun 15 03:27:18.357: INFO: PersistentVolume local-5b5xc found and phase=Bound (143.855193ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4sq8
STEP: Creating a pod to test subpath
Jun 15 03:27:18.793: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4sq8" in namespace "provisioning-5608" to be "Succeeded or Failed"
Jun 15 03:27:18.943: INFO: Pod "pod-subpath-test-preprovisionedpv-4sq8": Phase="Pending", Reason="", readiness=false. Elapsed: 149.751526ms
Jun 15 03:27:21.088: INFO: Pod "pod-subpath-test-preprovisionedpv-4sq8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294612049s
Jun 15 03:27:23.232: INFO: Pod "pod-subpath-test-preprovisionedpv-4sq8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438845038s
Jun 15 03:27:25.377: INFO: Pod "pod-subpath-test-preprovisionedpv-4sq8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584089706s
Jun 15 03:27:27.522: INFO: Pod "pod-subpath-test-preprovisionedpv-4sq8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.728701795s
STEP: Saw pod success
Jun 15 03:27:27.522: INFO: Pod "pod-subpath-test-preprovisionedpv-4sq8" satisfied condition "Succeeded or Failed"
Jun 15 03:27:27.666: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-preprovisionedpv-4sq8 container test-container-subpath-preprovisionedpv-4sq8: <nil>
STEP: delete the pod
Jun 15 03:27:27.960: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4sq8 to disappear
Jun 15 03:27:28.104: INFO: Pod pod-subpath-test-preprovisionedpv-4sq8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4sq8
Jun 15 03:27:28.104: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4sq8" in namespace "provisioning-5608"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:33.021: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:188
... skipping 65 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test override all
Jun 15 03:27:30.959: INFO: Waiting up to 5m0s for pod "client-containers-b620ce3a-3e8f-4263-83a2-e5c9b3bcbca3" in namespace "containers-3624" to be "Succeeded or Failed"
Jun 15 03:27:31.103: INFO: Pod "client-containers-b620ce3a-3e8f-4263-83a2-e5c9b3bcbca3": Phase="Pending", Reason="", readiness=false. Elapsed: 143.856548ms
Jun 15 03:27:33.247: INFO: Pod "client-containers-b620ce3a-3e8f-4263-83a2-e5c9b3bcbca3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287877286s
STEP: Saw pod success
Jun 15 03:27:33.247: INFO: Pod "client-containers-b620ce3a-3e8f-4263-83a2-e5c9b3bcbca3" satisfied condition "Succeeded or Failed"
Jun 15 03:27:33.391: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod client-containers-b620ce3a-3e8f-4263-83a2-e5c9b3bcbca3 container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:27:33.692: INFO: Waiting for pod client-containers-b620ce3a-3e8f-4263-83a2-e5c9b3bcbca3 to disappear
Jun 15 03:27:33.836: INFO: Pod client-containers-b620ce3a-3e8f-4263-83a2-e5c9b3bcbca3 no longer exists
[AfterEach] [sig-node] Containers
test/e2e/framework/framework.go:188
Jun 15 03:27:33.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3624" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":56,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:34.155: INFO: Only supported for providers [openstack] (not aws)
... skipping 47 lines ...
Jun 15 03:27:15.892: INFO: PersistentVolumeClaim pvc-xzkh8 found but phase is Pending instead of Bound.
Jun 15 03:27:18.039: INFO: PersistentVolumeClaim pvc-xzkh8 found and phase=Bound (15.159529725s)
Jun 15 03:27:18.039: INFO: Waiting up to 3m0s for PersistentVolume local-sg7jx to have phase Bound
Jun 15 03:27:18.182: INFO: PersistentVolume local-sg7jx found and phase=Bound (143.684814ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-q26v
STEP: Creating a pod to test subpath
Jun 15 03:27:18.633: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q26v" in namespace "provisioning-7114" to be "Succeeded or Failed"
Jun 15 03:27:18.777: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v": Phase="Pending", Reason="", readiness=false. Elapsed: 143.716515ms
Jun 15 03:27:20.922: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288148459s
Jun 15 03:27:23.067: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433120436s
Jun 15 03:27:25.211: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577836723s
STEP: Saw pod success
Jun 15 03:27:25.211: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v" satisfied condition "Succeeded or Failed"
Jun 15 03:27:25.355: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-preprovisionedpv-q26v container test-container-subpath-preprovisionedpv-q26v: <nil>
STEP: delete the pod
Jun 15 03:27:25.648: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q26v to disappear
Jun 15 03:27:25.791: INFO: Pod pod-subpath-test-preprovisionedpv-q26v no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q26v
Jun 15 03:27:25.791: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q26v" in namespace "provisioning-7114"
STEP: Creating pod pod-subpath-test-preprovisionedpv-q26v
STEP: Creating a pod to test subpath
Jun 15 03:27:26.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q26v" in namespace "provisioning-7114" to be "Succeeded or Failed"
Jun 15 03:27:26.222: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v": Phase="Pending", Reason="", readiness=false. Elapsed: 143.329915ms
Jun 15 03:27:28.366: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287548868s
Jun 15 03:27:30.511: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432653778s
Jun 15 03:27:32.657: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578700341s
STEP: Saw pod success
Jun 15 03:27:32.657: INFO: Pod "pod-subpath-test-preprovisionedpv-q26v" satisfied condition "Succeeded or Failed"
Jun 15 03:27:32.801: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-preprovisionedpv-q26v container test-container-subpath-preprovisionedpv-q26v: <nil>
STEP: delete the pod
Jun 15 03:27:33.101: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q26v to disappear
Jun 15 03:27:33.244: INFO: Pod pod-subpath-test-preprovisionedpv-q26v no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q26v
Jun 15 03:27:33.244: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q26v" in namespace "provisioning-7114"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:35.234: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:188
... skipping 152 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support two pods which have the same volume definition
test/e2e/storage/testsuites/ephemeral.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":3,"skipped":27,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:38.097: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
Jun 15 03:27:34.243: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-286774ae-d2e9-477a-a557-2627f553cad2" in namespace "security-context-test-7371" to be "Succeeded or Failed"
Jun 15 03:27:34.387: INFO: Pod "busybox-readonly-false-286774ae-d2e9-477a-a557-2627f553cad2": Phase="Pending", Reason="", readiness=false. Elapsed: 144.034255ms
Jun 15 03:27:36.532: INFO: Pod "busybox-readonly-false-286774ae-d2e9-477a-a557-2627f553cad2": Phase="Running", Reason="", readiness=false. Elapsed: 2.28921387s
Jun 15 03:27:38.677: INFO: Pod "busybox-readonly-false-286774ae-d2e9-477a-a557-2627f553cad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433481817s
Jun 15 03:27:38.677: INFO: Pod "busybox-readonly-false-286774ae-d2e9-477a-a557-2627f553cad2" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:188
Jun 15 03:27:38.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7371" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
When creating a pod with readOnlyRootFilesystem
test/e2e/common/node/security_context.go:173
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":95,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:38.977: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/framework/framework.go:188
... skipping 82 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support multiple inline ephemeral volumes
test/e2e/storage/testsuites/ephemeral.go:254
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":7,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:39.576: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:188
... skipping 85 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating secret with name secret-test-978902fa-1e3b-4caa-99d0-04baae4f8109
STEP: Creating a pod to test consume secrets
Jun 15 03:27:35.493: INFO: Waiting up to 5m0s for pod "pod-secrets-e987261d-7f94-400b-910b-2e5f0f37efb0" in namespace "secrets-5277" to be "Succeeded or Failed"
Jun 15 03:27:35.637: INFO: Pod "pod-secrets-e987261d-7f94-400b-910b-2e5f0f37efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 144.17234ms
Jun 15 03:27:37.782: INFO: Pod "pod-secrets-e987261d-7f94-400b-910b-2e5f0f37efb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288623805s
Jun 15 03:27:39.927: INFO: Pod "pod-secrets-e987261d-7f94-400b-910b-2e5f0f37efb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433798296s
STEP: Saw pod success
Jun 15 03:27:39.927: INFO: Pod "pod-secrets-e987261d-7f94-400b-910b-2e5f0f37efb0" satisfied condition "Succeeded or Failed"
Jun 15 03:27:40.072: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-secrets-e987261d-7f94-400b-910b-2e5f0f37efb0 container secret-volume-test: <nil>
STEP: delete the pod
Jun 15 03:27:40.373: INFO: Waiting for pod pod-secrets-e987261d-7f94-400b-910b-2e5f0f37efb0 to disappear
Jun 15 03:27:40.517: INFO: Pod pod-secrets-e987261d-7f94-400b-910b-2e5f0f37efb0 no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.624 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":64,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:40.825: INFO: Only supported for providers [openstack] (not aws)
... skipping 91 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
Jun 15 03:27:20.107: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 15 03:27:20.398: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6577" in namespace "provisioning-6577" to be "Succeeded or Failed"
Jun 15 03:27:20.542: INFO: Pod "hostpath-symlink-prep-provisioning-6577": Phase="Pending", Reason="", readiness=false. Elapsed: 144.274869ms
Jun 15 03:27:22.688: INFO: Pod "hostpath-symlink-prep-provisioning-6577": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290543976s
Jun 15 03:27:24.833: INFO: Pod "hostpath-symlink-prep-provisioning-6577": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434753143s
Jun 15 03:27:26.978: INFO: Pod "hostpath-symlink-prep-provisioning-6577": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.580124821s
STEP: Saw pod success
Jun 15 03:27:26.978: INFO: Pod "hostpath-symlink-prep-provisioning-6577" satisfied condition "Succeeded or Failed"
Jun 15 03:27:26.978: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6577" in namespace "provisioning-6577"
Jun 15 03:27:27.126: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6577" to be fully deleted
Jun 15 03:27:27.270: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-cgrt
STEP: Creating a pod to test subpath
Jun 15 03:27:27.415: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-cgrt" in namespace "provisioning-6577" to be "Succeeded or Failed"
Jun 15 03:27:27.559: INFO: Pod "pod-subpath-test-inlinevolume-cgrt": Phase="Pending", Reason="", readiness=false. Elapsed: 143.94713ms
Jun 15 03:27:29.704: INFO: Pod "pod-subpath-test-inlinevolume-cgrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28935814s
Jun 15 03:27:31.850: INFO: Pod "pod-subpath-test-inlinevolume-cgrt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435117517s
Jun 15 03:27:33.995: INFO: Pod "pod-subpath-test-inlinevolume-cgrt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580408399s
Jun 15 03:27:36.143: INFO: Pod "pod-subpath-test-inlinevolume-cgrt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.727617711s
Jun 15 03:27:38.288: INFO: Pod "pod-subpath-test-inlinevolume-cgrt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.872862187s
STEP: Saw pod success
Jun 15 03:27:38.288: INFO: Pod "pod-subpath-test-inlinevolume-cgrt" satisfied condition "Succeeded or Failed"
Jun 15 03:27:38.432: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-inlinevolume-cgrt container test-container-subpath-inlinevolume-cgrt: <nil>
STEP: delete the pod
Jun 15 03:27:38.727: INFO: Waiting for pod pod-subpath-test-inlinevolume-cgrt to disappear
Jun 15 03:27:38.871: INFO: Pod pod-subpath-test-inlinevolume-cgrt no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-cgrt
Jun 15 03:27:38.871: INFO: Deleting pod "pod-subpath-test-inlinevolume-cgrt" in namespace "provisioning-6577"
STEP: Deleting pod
Jun 15 03:27:39.015: INFO: Deleting pod "pod-subpath-test-inlinevolume-cgrt" in namespace "provisioning-6577"
Jun 15 03:27:39.314: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6577" in namespace "provisioning-6577" to be "Succeeded or Failed"
Jun 15 03:27:39.459: INFO: Pod "hostpath-symlink-prep-provisioning-6577": Phase="Pending", Reason="", readiness=false. Elapsed: 144.319863ms
Jun 15 03:27:41.603: INFO: Pod "hostpath-symlink-prep-provisioning-6577": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288638861s
Jun 15 03:27:43.749: INFO: Pod "hostpath-symlink-prep-provisioning-6577": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434403191s
STEP: Saw pod success
Jun 15 03:27:43.749: INFO: Pod "hostpath-symlink-prep-provisioning-6577" satisfied condition "Succeeded or Failed"
Jun 15 03:27:43.749: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6577" in namespace "provisioning-6577"
Jun 15 03:27:43.898: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6577" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:188
Jun 15 03:27:44.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6577" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":28,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:44.351: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
[BeforeEach] [sig-node] Pods
test/e2e/common/node/pods.go:191
[It] should contain environment variables for services [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
Jun 15 03:27:36.600: INFO: The status of Pod server-envvars-8fa98725-97cc-4985-ad35-1600be814182 is Pending, waiting for it to be Running (with Ready = true)
Jun 15 03:27:38.744: INFO: The status of Pod server-envvars-8fa98725-97cc-4985-ad35-1600be814182 is Running (Ready = true)
Jun 15 03:27:39.178: INFO: Waiting up to 5m0s for pod "client-envvars-21c41b9f-a7b7-47e7-a594-2d26e9ef1712" in namespace "pods-1526" to be "Succeeded or Failed"
Jun 15 03:27:39.324: INFO: Pod "client-envvars-21c41b9f-a7b7-47e7-a594-2d26e9ef1712": Phase="Pending", Reason="", readiness=false. Elapsed: 145.596559ms
Jun 15 03:27:41.468: INFO: Pod "client-envvars-21c41b9f-a7b7-47e7-a594-2d26e9ef1712": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290246927s
Jun 15 03:27:43.612: INFO: Pod "client-envvars-21c41b9f-a7b7-47e7-a594-2d26e9ef1712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434445484s
STEP: Saw pod success
Jun 15 03:27:43.613: INFO: Pod "client-envvars-21c41b9f-a7b7-47e7-a594-2d26e9ef1712" satisfied condition "Succeeded or Failed"
Jun 15 03:27:43.756: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod client-envvars-21c41b9f-a7b7-47e7-a594-2d26e9ef1712 container env3cont: <nil>
STEP: delete the pod
Jun 15 03:27:44.050: INFO: Waiting for pod client-envvars-21c41b9f-a7b7-47e7-a594-2d26e9ef1712 to disappear
Jun 15 03:27:44.195: INFO: Pod client-envvars-21c41b9f-a7b7-47e7-a594-2d26e9ef1712 no longer exists
[AfterEach] [sig-node] Pods
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:9.182 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should contain environment variables for services [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":73,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:44.498: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 90 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:112
STEP: Creating configMap with name configmap-test-volume-map-30b27a81-ed67-4b31-87c7-6fd85f2f5da2
STEP: Creating a pod to test consume configMaps
Jun 15 03:27:40.944: INFO: Waiting up to 5m0s for pod "pod-configmaps-5aef0c1c-4f24-466a-bdd3-2edde19a318e" in namespace "configmap-5047" to be "Succeeded or Failed"
Jun 15 03:27:41.088: INFO: Pod "pod-configmaps-5aef0c1c-4f24-466a-bdd3-2edde19a318e": Phase="Pending", Reason="", readiness=false. Elapsed: 144.276728ms
Jun 15 03:27:43.232: INFO: Pod "pod-configmaps-5aef0c1c-4f24-466a-bdd3-2edde19a318e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288654958s
Jun 15 03:27:45.378: INFO: Pod "pod-configmaps-5aef0c1c-4f24-466a-bdd3-2edde19a318e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433991899s
STEP: Saw pod success
Jun 15 03:27:45.378: INFO: Pod "pod-configmaps-5aef0c1c-4f24-466a-bdd3-2edde19a318e" satisfied condition "Succeeded or Failed"
Jun 15 03:27:45.522: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-configmaps-5aef0c1c-4f24-466a-bdd3-2edde19a318e container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:27:45.823: INFO: Waiting for pod pod-configmaps-5aef0c1c-4f24-466a-bdd3-2edde19a318e to disappear
Jun 15 03:27:45.967: INFO: Pod pod-configmaps-5aef0c1c-4f24-466a-bdd3-2edde19a318e no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.627 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:112
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":40,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:46.289: INFO: Only supported for providers [azure] (not aws)
... skipping 90 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":45,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:46.562: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-link-bindmounted]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 14 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7089
STEP: Waiting until pod test-pod will start running in namespace statefulset-7089
STEP: Creating statefulset with conflicting port in namespace statefulset-7089
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7089
Jun 15 03:27:27.490: INFO: Observed stateful pod in namespace: statefulset-7089, name: ss-0, uid: 997648f4-92fd-47df-9f9b-7f7d63a416c5, status phase: Pending. Waiting for statefulset controller to delete.
Jun 15 03:27:28.679: INFO: Observed stateful pod in namespace: statefulset-7089, name: ss-0, uid: 997648f4-92fd-47df-9f9b-7f7d63a416c5, status phase: Failed. Waiting for statefulset controller to delete.
Jun 15 03:27:28.685: INFO: Observed stateful pod in namespace: statefulset-7089, name: ss-0, uid: 997648f4-92fd-47df-9f9b-7f7d63a416c5, status phase: Failed. Waiting for statefulset controller to delete.
Jun 15 03:27:28.687: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7089
STEP: Removing pod with conflicting port in namespace statefulset-7089
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7089 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:122
Jun 15 03:27:35.426: INFO: Deleting all statefulset in ns statefulset-7089
... skipping 11 lines ...
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
Should recreate evicted statefulset [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:26:42.445: INFO: >>> kubeConfig: /root/.kube/config
... skipping 58 lines ...
Jun 15 03:26:50.852: INFO: PersistentVolumeClaim csi-hostpathmq5pr found but phase is Pending instead of Bound.
Jun 15 03:26:52.997: INFO: PersistentVolumeClaim csi-hostpathmq5pr found but phase is Pending instead of Bound.
Jun 15 03:26:55.141: INFO: PersistentVolumeClaim csi-hostpathmq5pr found but phase is Pending instead of Bound.
Jun 15 03:26:57.285: INFO: PersistentVolumeClaim csi-hostpathmq5pr found and phase=Bound (6.576543438s)
STEP: Expanding non-expandable pvc
Jun 15 03:26:57.572: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
Jun 15 03:26:57.865: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:00.153: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:02.153: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:04.153: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:06.157: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:08.160: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:10.156: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:12.153: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:14.153: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:16.154: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:18.159: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:20.153: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:22.179: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:24.183: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:26.153: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:28.153: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 15 03:27:28.441: INFO: Error updating pvc csi-hostpathmq5pr: persistentvolumeclaims "csi-hostpathmq5pr" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jun 15 03:27:28.441: INFO: Deleting PersistentVolumeClaim "csi-hostpathmq5pr"
Jun 15 03:27:28.587: INFO: Waiting up to 5m0s for PersistentVolume pvc-a5de13f4-0fb6-4c18-b011-3680b69a0d86 to get deleted
Jun 15 03:27:28.731: INFO: PersistentVolume pvc-a5de13f4-0fb6-4c18-b011-3680b69a0d86 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-6751
... skipping 52 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:50
should not allow expansion of pvcs without AllowVolumeExpansion property
test/e2e/storage/testsuites/volume_expand.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":4,"skipped":41,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:15.898 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:47.345: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: cinder]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [openstack] (not aws)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 53 lines ...
Jun 15 03:27:22.187: INFO: Unable to read jessie_udp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:22.333: INFO: Unable to read jessie_tcp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:22.482: INFO: Unable to read jessie_udp@dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:22.626: INFO: Unable to read jessie_tcp@dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:22.770: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:22.914: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:23.490: INFO: Lookups using dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4197 wheezy_tcp@dns-test-service.dns-4197 wheezy_udp@dns-test-service.dns-4197.svc wheezy_tcp@dns-test-service.dns-4197.svc wheezy_udp@_http._tcp.dns-test-service.dns-4197.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4197.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4197 jessie_tcp@dns-test-service.dns-4197 jessie_udp@dns-test-service.dns-4197.svc jessie_tcp@dns-test-service.dns-4197.svc jessie_udp@_http._tcp.dns-test-service.dns-4197.svc jessie_tcp@_http._tcp.dns-test-service.dns-4197.svc]
Jun 15 03:27:28.636: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:28.780: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:28.924: INFO: Unable to read wheezy_udp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:29.069: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:29.213: INFO: Unable to read wheezy_udp@dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
... skipping 5 lines ...
Jun 15 03:27:30.665: INFO: Unable to read jessie_udp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:30.809: INFO: Unable to read jessie_tcp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:30.954: INFO: Unable to read jessie_udp@dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:31.098: INFO: Unable to read jessie_tcp@dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:31.242: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:31.389: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:31.965: INFO: Lookups using dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4197 wheezy_tcp@dns-test-service.dns-4197 wheezy_udp@dns-test-service.dns-4197.svc wheezy_tcp@dns-test-service.dns-4197.svc wheezy_udp@_http._tcp.dns-test-service.dns-4197.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4197.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4197 jessie_tcp@dns-test-service.dns-4197 jessie_udp@dns-test-service.dns-4197.svc jessie_tcp@dns-test-service.dns-4197.svc jessie_udp@_http._tcp.dns-test-service.dns-4197.svc jessie_tcp@_http._tcp.dns-test-service.dns-4197.svc]
Jun 15 03:27:33.635: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:33.779: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:33.923: INFO: Unable to read wheezy_udp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:34.069: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:34.213: INFO: Unable to read wheezy_udp@dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
... skipping 5 lines ...
Jun 15 03:27:35.682: INFO: Unable to read jessie_udp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:35.826: INFO: Unable to read jessie_tcp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:35.970: INFO: Unable to read jessie_udp@dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:36.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:36.259: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:36.403: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:36.980: INFO: Lookups using dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4197 wheezy_tcp@dns-test-service.dns-4197 wheezy_udp@dns-test-service.dns-4197.svc wheezy_tcp@dns-test-service.dns-4197.svc wheezy_udp@_http._tcp.dns-test-service.dns-4197.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4197.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4197 jessie_tcp@dns-test-service.dns-4197 jessie_udp@dns-test-service.dns-4197.svc jessie_tcp@dns-test-service.dns-4197.svc jessie_udp@_http._tcp.dns-test-service.dns-4197.svc jessie_tcp@_http._tcp.dns-test-service.dns-4197.svc]
Jun 15 03:27:38.636: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:38.780: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:38.925: INFO: Unable to read wheezy_udp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:39.070: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4197 from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:39.214: INFO: Unable to read wheezy_udp@dns-test-service.dns-4197.svc from pod dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937: the server could not find the requested resource (get pods dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937)
Jun 15 03:27:41.962: INFO: Lookups using dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4197 wheezy_tcp@dns-test-service.dns-4197 wheezy_udp@dns-test-service.dns-4197.svc]
Jun 15 03:27:46.978: INFO: DNS probes using dns-4197/dns-test-018b8878-8eb3-48b7-8a96-8d50d8cf8937 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:41.582 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":72,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:47.746: INFO: Only supported for providers [azure] (not aws)
... skipping 64 lines ...
• [SLOW TEST:7.117 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should update labels on modification [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":71,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:48.007: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: blockfs]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 15 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:27:49.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1729" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":7,"skipped":84,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:49.415: INFO: Only supported for providers [openstack] (not aws)
... skipping 119 lines ...
Jun 15 03:27:16.190: INFO: PersistentVolumeClaim pvc-8qpwn found but phase is Pending instead of Bound.
Jun 15 03:27:18.334: INFO: PersistentVolumeClaim pvc-8qpwn found and phase=Bound (6.578013053s)
Jun 15 03:27:18.334: INFO: Waiting up to 3m0s for PersistentVolume local-4l6m2 to have phase Bound
Jun 15 03:27:18.483: INFO: PersistentVolume local-4l6m2 found and phase=Bound (148.17217ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dzd2
STEP: Creating a pod to test atomic-volume-subpath
Jun 15 03:27:18.923: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dzd2" in namespace "provisioning-9802" to be "Succeeded or Failed"
Jun 15 03:27:19.068: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Pending", Reason="", readiness=false. Elapsed: 144.661093ms
Jun 15 03:27:21.214: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Running", Reason="", readiness=true. Elapsed: 2.290239567s
Jun 15 03:27:23.360: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Running", Reason="", readiness=true. Elapsed: 4.436014621s
Jun 15 03:27:25.504: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Running", Reason="", readiness=true. Elapsed: 6.580506426s
Jun 15 03:27:27.649: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Running", Reason="", readiness=true. Elapsed: 8.725622513s
Jun 15 03:27:29.795: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Running", Reason="", readiness=true. Elapsed: 10.871491615s
... skipping 2 lines ...
Jun 15 03:27:36.229: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Running", Reason="", readiness=true. Elapsed: 17.305568439s
Jun 15 03:27:38.373: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Running", Reason="", readiness=true. Elapsed: 19.449986196s
Jun 15 03:27:40.519: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Running", Reason="", readiness=true. Elapsed: 21.595513411s
Jun 15 03:27:42.664: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Running", Reason="", readiness=false. Elapsed: 23.7405639s
Jun 15 03:27:44.809: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.885915923s
STEP: Saw pod success
Jun 15 03:27:44.810: INFO: Pod "pod-subpath-test-preprovisionedpv-dzd2" satisfied condition "Succeeded or Failed"
Jun 15 03:27:44.953: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-preprovisionedpv-dzd2 container test-container-subpath-preprovisionedpv-dzd2: <nil>
STEP: delete the pod
Jun 15 03:27:45.250: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dzd2 to disappear
Jun 15 03:27:45.394: INFO: Pod pod-subpath-test-preprovisionedpv-dzd2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dzd2
Jun 15 03:27:45.394: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dzd2" in namespace "provisioning-9802"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":28,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:50.291: INFO: Only supported for providers [azure] (not aws)
... skipping 48 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:27:51.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-7543" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":5,"skipped":41,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 124 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":6,"skipped":74,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:52.439: INFO: Only supported for providers [vsphere] (not aws)
... skipping 130 lines ...
Jun 15 03:27:35.434: INFO: Waiting for pod aws-client to disappear
Jun 15 03:27:35.579: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jun 15 03:27:35.579: INFO: Deleting PersistentVolumeClaim "pvc-xxbzd"
Jun 15 03:27:35.724: INFO: Deleting PersistentVolume "aws-fzmws"
Jun 15 03:27:36.582: INFO: Couldn't delete PD "aws://sa-east-1a/vol-09160721aede9a47c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09160721aede9a47c is currently attached to i-05fe3937684c9d649
status code: 400, request id: 40fda644-ef8e-404f-bc40-ba8b778defb1
Jun 15 03:27:42.318: INFO: Couldn't delete PD "aws://sa-east-1a/vol-09160721aede9a47c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09160721aede9a47c is currently attached to i-05fe3937684c9d649
status code: 400, request id: 265c2553-0293-4bc7-881e-d8b93d77d3b4
Jun 15 03:27:48.049: INFO: Couldn't delete PD "aws://sa-east-1a/vol-09160721aede9a47c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09160721aede9a47c is currently attached to i-05fe3937684c9d649
status code: 400, request id: 06b0b1f6-d03f-491b-b845-ecfc92a3a888
Jun 15 03:27:53.813: INFO: Successfully deleted PD "aws://sa-east-1a/vol-09160721aede9a47c".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/framework/framework.go:188
Jun 15 03:27:53.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2319" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":37,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:7.556 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}
[BeforeEach] [sig-api-machinery] client-go should negotiate
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:27:54.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename protocol
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:27:56.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "protocol-3449" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":6,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:56.629: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:188
... skipping 137 lines ...
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:380
should support exec through an HTTP proxy
test/e2e/kubectl/kubectl.go:440
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":8,"skipped":100,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:56.716: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:188
... skipping 66 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating secret with name secret-test-cde14209-c6fa-43f5-a1b3-1362486c308d
STEP: Creating a pod to test consume secrets
Jun 15 03:27:50.772: INFO: Waiting up to 5m0s for pod "pod-secrets-3f0cf098-3df0-45c3-97c5-88799ad05c21" in namespace "secrets-6722" to be "Succeeded or Failed"
Jun 15 03:27:50.920: INFO: Pod "pod-secrets-3f0cf098-3df0-45c3-97c5-88799ad05c21": Phase="Pending", Reason="", readiness=false. Elapsed: 148.001673ms
Jun 15 03:27:53.065: INFO: Pod "pod-secrets-3f0cf098-3df0-45c3-97c5-88799ad05c21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293020656s
Jun 15 03:27:55.211: INFO: Pod "pod-secrets-3f0cf098-3df0-45c3-97c5-88799ad05c21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438114786s
Jun 15 03:27:57.356: INFO: Pod "pod-secrets-3f0cf098-3df0-45c3-97c5-88799ad05c21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.5838056s
STEP: Saw pod success
Jun 15 03:27:57.356: INFO: Pod "pod-secrets-3f0cf098-3df0-45c3-97c5-88799ad05c21" satisfied condition "Succeeded or Failed"
Jun 15 03:27:57.500: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-secrets-3f0cf098-3df0-45c3-97c5-88799ad05c21 container secret-volume-test: <nil>
STEP: delete the pod
Jun 15 03:27:57.798: INFO: Waiting for pod pod-secrets-3f0cf098-3df0-45c3-97c5-88799ad05c21 to disappear
Jun 15 03:27:57.942: INFO: Pod pod-secrets-3f0cf098-3df0-45c3-97c5-88799ad05c21 no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.760 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":96,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 57 lines ...
Jun 15 03:27:47.262: INFO: PersistentVolumeClaim pvc-4dfrj found but phase is Pending instead of Bound.
Jun 15 03:27:49.408: INFO: PersistentVolumeClaim pvc-4dfrj found and phase=Bound (4.434466246s)
Jun 15 03:27:49.408: INFO: Waiting up to 3m0s for PersistentVolume local-sckv6 to have phase Bound
Jun 15 03:27:49.553: INFO: PersistentVolume local-sckv6 found and phase=Bound (145.354705ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xnll
STEP: Creating a pod to test subpath
Jun 15 03:27:49.986: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xnll" in namespace "provisioning-4227" to be "Succeeded or Failed"
Jun 15 03:27:50.132: INFO: Pod "pod-subpath-test-preprovisionedpv-xnll": Phase="Pending", Reason="", readiness=false. Elapsed: 145.304619ms
Jun 15 03:27:52.276: INFO: Pod "pod-subpath-test-preprovisionedpv-xnll": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290007267s
Jun 15 03:27:54.422: INFO: Pod "pod-subpath-test-preprovisionedpv-xnll": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435962414s
Jun 15 03:27:56.567: INFO: Pod "pod-subpath-test-preprovisionedpv-xnll": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.580817387s
STEP: Saw pod success
Jun 15 03:27:56.567: INFO: Pod "pod-subpath-test-preprovisionedpv-xnll" satisfied condition "Succeeded or Failed"
Jun 15 03:27:56.711: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-preprovisionedpv-xnll container test-container-subpath-preprovisionedpv-xnll: <nil>
STEP: delete the pod
Jun 15 03:27:57.009: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xnll to disappear
Jun 15 03:27:57.154: INFO: Pod pod-subpath-test-preprovisionedpv-xnll no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xnll
Jun 15 03:27:57.154: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xnll" in namespace "provisioning-4227"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":35,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:27:59.202: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 113 lines ...
• [SLOW TEST:61.493 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should delete jobs and pods created by cronjob
test/e2e/apimachinery/garbage_collector.go:1145
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":6,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:02.955: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/framework/framework.go:188
... skipping 129 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":11,"skipped":107,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:03.079: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 71 lines ...
Jun 15 03:28:01.431: INFO: PersistentVolumeClaim pvc-z78mw found but phase is Pending instead of Bound.
Jun 15 03:28:03.577: INFO: PersistentVolumeClaim pvc-z78mw found and phase=Bound (6.579828639s)
Jun 15 03:28:03.577: INFO: Waiting up to 3m0s for PersistentVolume local-8pcr9 to have phase Bound
Jun 15 03:28:03.721: INFO: PersistentVolume local-8pcr9 found and phase=Bound (144.096957ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4xm7
STEP: Creating a pod to test subpath
Jun 15 03:28:04.166: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4xm7" in namespace "provisioning-4962" to be "Succeeded or Failed"
Jun 15 03:28:04.313: INFO: Pod "pod-subpath-test-preprovisionedpv-4xm7": Phase="Pending", Reason="", readiness=false. Elapsed: 146.608855ms
Jun 15 03:28:06.458: INFO: Pod "pod-subpath-test-preprovisionedpv-4xm7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291024059s
Jun 15 03:28:08.604: INFO: Pod "pod-subpath-test-preprovisionedpv-4xm7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436958812s
Jun 15 03:28:10.749: INFO: Pod "pod-subpath-test-preprovisionedpv-4xm7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.582703859s
STEP: Saw pod success
Jun 15 03:28:10.749: INFO: Pod "pod-subpath-test-preprovisionedpv-4xm7" satisfied condition "Succeeded or Failed"
Jun 15 03:28:10.894: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-preprovisionedpv-4xm7 container test-container-subpath-preprovisionedpv-4xm7: <nil>
STEP: delete the pod
Jun 15 03:28:11.188: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4xm7 to disappear
Jun 15 03:28:11.332: INFO: Pod pod-subpath-test-preprovisionedpv-4xm7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4xm7
Jun 15 03:28:11.332: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4xm7" in namespace "provisioning-4962"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":39,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] PreStop
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:11.765 seconds]
[sig-node] PreStop
test/e2e/node/framework.go:23
should call prestop when killing a pod [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":7,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:14.757: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/framework/framework.go:188
... skipping 66 lines ...
Jun 15 03:27:46.326: INFO: PersistentVolumeClaim pvc-6fkf9 found but phase is Pending instead of Bound.
Jun 15 03:27:48.472: INFO: PersistentVolumeClaim pvc-6fkf9 found and phase=Bound (13.018492359s)
Jun 15 03:27:48.472: INFO: Waiting up to 3m0s for PersistentVolume local-6bpsm to have phase Bound
Jun 15 03:27:48.617: INFO: PersistentVolume local-6bpsm found and phase=Bound (144.440823ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rt5j
STEP: Creating a pod to test atomic-volume-subpath
Jun 15 03:27:49.051: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rt5j" in namespace "provisioning-2930" to be "Succeeded or Failed"
Jun 15 03:27:49.196: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Pending", Reason="", readiness=false. Elapsed: 144.743503ms
Jun 15 03:27:51.342: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29053686s
Jun 15 03:27:53.486: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435381567s
Jun 15 03:27:55.632: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Running", Reason="", readiness=true. Elapsed: 6.581115195s
Jun 15 03:27:57.778: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Running", Reason="", readiness=true. Elapsed: 8.726483138s
Jun 15 03:27:59.923: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Running", Reason="", readiness=true. Elapsed: 10.872127569s
... skipping 2 lines ...
Jun 15 03:28:06.363: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Running", Reason="", readiness=true. Elapsed: 17.311745973s
Jun 15 03:28:08.509: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Running", Reason="", readiness=true. Elapsed: 19.457531049s
Jun 15 03:28:10.654: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Running", Reason="", readiness=true. Elapsed: 21.602805484s
Jun 15 03:28:12.799: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Running", Reason="", readiness=true. Elapsed: 23.747979445s
Jun 15 03:28:14.945: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.893607458s
STEP: Saw pod success
Jun 15 03:28:14.945: INFO: Pod "pod-subpath-test-preprovisionedpv-rt5j" satisfied condition "Succeeded or Failed"
Jun 15 03:28:15.089: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-preprovisionedpv-rt5j container test-container-subpath-preprovisionedpv-rt5j: <nil>
STEP: delete the pod
Jun 15 03:28:15.391: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rt5j to disappear
Jun 15 03:28:15.536: INFO: Pod pod-subpath-test-preprovisionedpv-rt5j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rt5j
Jun 15 03:28:15.536: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rt5j" in namespace "provisioning-2930"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":42,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 91 lines ...
Jun 15 03:26:17.170: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4196
Jun 15 03:26:17.322: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4196
Jun 15 03:26:17.468: INFO: creating *v1.StatefulSet: csi-mock-volumes-4196-4718/csi-mockplugin
Jun 15 03:26:17.615: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4196
Jun 15 03:26:17.761: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4196"
Jun 15 03:26:17.909: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4196 to register on node i-0a5092cc559ae3bff
I0615 03:26:24.897854 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0615 03:26:25.060025 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4196","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0615 03:26:25.204323 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0615 03:26:25.360076 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0615 03:26:25.641276 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4196","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0615 03:26:26.384411 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4196"},"Error":"","FullError":null}
STEP: Creating pod with fsGroup
Jun 15 03:26:35.042: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 15 03:26:35.188: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-gnzs5] to have phase Bound
I0615 03:26:35.195156 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-748f53db-c625-4e4a-b924-dea908491ac2","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-748f53db-c625-4e4a-b924-dea908491ac2"}}},"Error":"","FullError":null}
Jun 15 03:26:35.333: INFO: PersistentVolumeClaim pvc-gnzs5 found but phase is Pending instead of Bound.
Jun 15 03:26:37.480: INFO: PersistentVolumeClaim pvc-gnzs5 found and phase=Bound (2.291659752s)
I0615 03:26:38.091913 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:26:38.236523 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:26:38.381046 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 15 03:26:38.526: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:26:38.527: INFO: ExecWithOptions: Clientset creation
Jun 15 03:26:38.527: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-4196-4718/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-4196%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-4196%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
I0615 03:26:39.476276 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-4196/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-748f53db-c625-4e4a-b924-dea908491ac2","storage.kubernetes.io/csiProvisionerIdentity":"1655263585433-8081-csi-mock-csi-mock-volumes-4196"}},"Response":{},"Error":"","FullError":null}
I0615 03:26:39.622159 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:26:39.769323 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:26:39.914644 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 15 03:26:40.059: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:26:40.060: INFO: ExecWithOptions: Clientset creation
Jun 15 03:26:40.060: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-4196-4718/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Fd5feb3e2-aa91-4963-ba48-c6ef71f48bc4%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-748f53db-c625-4e4a-b924-dea908491ac2%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Fd5feb3e2-aa91-4963-ba48-c6ef71f48bc4%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-748f53db-c625-4e4a-b924-dea908491ac2%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 15 03:26:41.033: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:26:41.033: INFO: ExecWithOptions: Clientset creation
Jun 15 03:26:41.033: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-4196-4718/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Fd5feb3e2-aa91-4963-ba48-c6ef71f48bc4%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-748f53db-c625-4e4a-b924-dea908491ac2%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2Fd5feb3e2-aa91-4963-ba48-c6ef71f48bc4%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-748f53db-c625-4e4a-b924-dea908491ac2%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 15 03:26:41.962: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:26:41.963: INFO: ExecWithOptions: Clientset creation
Jun 15 03:26:41.963: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-4196-4718/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2Fd5feb3e2-aa91-4963-ba48-c6ef71f48bc4%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-748f53db-c625-4e4a-b924-dea908491ac2%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0615 03:26:42.909315 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-4196/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/d5feb3e2-aa91-4963-ba48-c6ef71f48bc4/volumes/kubernetes.io~csi/pvc-748f53db-c625-4e4a-b924-dea908491ac2/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-748f53db-c625-4e4a-b924-dea908491ac2","storage.kubernetes.io/csiProvisionerIdentity":"1655263585433-8081-csi-mock-csi-mock-volumes-4196"}},"Response":{},"Error":"","FullError":null}
STEP: Deleting pod pvc-volume-tester-q886g
Jun 15 03:26:44.207: INFO: Deleting pod "pvc-volume-tester-q886g" in namespace "csi-mock-volumes-4196"
Jun 15 03:26:44.353: INFO: Wait up to 5m0s for pod "pvc-volume-tester-q886g" to be fully deleted
Jun 15 03:27:16.117: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:27:16.118: INFO: ExecWithOptions: Clientset creation
Jun 15 03:27:16.118: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-4196-4718/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2Fd5feb3e2-aa91-4963-ba48-c6ef71f48bc4%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-748f53db-c625-4e4a-b924-dea908491ac2%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0615 03:27:17.106443 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/d5feb3e2-aa91-4963-ba48-c6ef71f48bc4/volumes/kubernetes.io~csi/pvc-748f53db-c625-4e4a-b924-dea908491ac2/mount"},"Response":{},"Error":"","FullError":null}
I0615 03:27:17.324985 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:27:17.470073 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-4196/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null}
STEP: Deleting claim pvc-gnzs5
Jun 15 03:27:18.947: INFO: Waiting up to 2m0s for PersistentVolume pvc-748f53db-c625-4e4a-b924-dea908491ac2 to get deleted
I0615 03:27:18.969728 6532 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
Jun 15 03:27:19.093: INFO: PersistentVolume pvc-748f53db-c625-4e4a-b924-dea908491ac2 found and phase=Released (145.66383ms)
Jun 15 03:27:21.239: INFO: PersistentVolume pvc-748f53db-c625-4e4a-b924-dea908491ac2 was removed
STEP: Deleting storageclass csi-mock-volumes-4196-sck6bsc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-4196
STEP: Waiting for namespaces [csi-mock-volumes-4196] to vanish
... skipping 40 lines ...
test/e2e/storage/utils/framework.go:23
Delegate FSGroup to CSI driver [LinuxOnly]
test/e2e/storage/csi_mock_volume.go:1719
should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
test/e2e/storage/csi_mock_volume.go:1735
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP","total":-1,"completed":3,"skipped":33,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:24.727: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 122 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":45,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:24.789: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 184 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume at the same time
test/e2e/storage/persistent_volumes-local.go:250
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":48,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 24 lines ...
Jun 15 03:28:16.671: INFO: PersistentVolumeClaim pvc-qxf8k found but phase is Pending instead of Bound.
Jun 15 03:28:18.815: INFO: PersistentVolumeClaim pvc-qxf8k found and phase=Bound (15.16268641s)
Jun 15 03:28:18.815: INFO: Waiting up to 3m0s for PersistentVolume local-557nn to have phase Bound
Jun 15 03:28:18.960: INFO: PersistentVolume local-557nn found and phase=Bound (144.305495ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7m24
STEP: Creating a pod to test subpath
Jun 15 03:28:19.401: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7m24" in namespace "provisioning-3853" to be "Succeeded or Failed"
Jun 15 03:28:19.546: INFO: Pod "pod-subpath-test-preprovisionedpv-7m24": Phase="Pending", Reason="", readiness=false. Elapsed: 144.28985ms
Jun 15 03:28:21.690: INFO: Pod "pod-subpath-test-preprovisionedpv-7m24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288754708s
Jun 15 03:28:23.835: INFO: Pod "pod-subpath-test-preprovisionedpv-7m24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433999944s
Jun 15 03:28:25.980: INFO: Pod "pod-subpath-test-preprovisionedpv-7m24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578304851s
STEP: Saw pod success
Jun 15 03:28:25.980: INFO: Pod "pod-subpath-test-preprovisionedpv-7m24" satisfied condition "Succeeded or Failed"
Jun 15 03:28:26.124: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-preprovisionedpv-7m24 container test-container-volume-preprovisionedpv-7m24: <nil>
STEP: delete the pod
Jun 15 03:28:26.422: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7m24 to disappear
Jun 15 03:28:26.568: INFO: Pod pod-subpath-test-preprovisionedpv-7m24 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7m24
Jun 15 03:28:26.568: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7m24" in namespace "provisioning-3853"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":111,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
... skipping 128 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support two pods which have the same volume definition
test/e2e/storage/testsuites/ephemeral.go:216
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":4,"skipped":21,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Downward API
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:28:24.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jun 15 03:28:25.901: INFO: Waiting up to 5m0s for pod "downward-api-1c002c2b-8c45-43f0-bdc3-f30774a1b6c5" in namespace "downward-api-5687" to be "Succeeded or Failed"
Jun 15 03:28:26.048: INFO: Pod "downward-api-1c002c2b-8c45-43f0-bdc3-f30774a1b6c5": Phase="Pending", Reason="", readiness=false. Elapsed: 147.081071ms
Jun 15 03:28:28.195: INFO: Pod "downward-api-1c002c2b-8c45-43f0-bdc3-f30774a1b6c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293809388s
Jun 15 03:28:30.353: INFO: Pod "downward-api-1c002c2b-8c45-43f0-bdc3-f30774a1b6c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.452288833s
STEP: Saw pod success
Jun 15 03:28:30.353: INFO: Pod "downward-api-1c002c2b-8c45-43f0-bdc3-f30774a1b6c5" satisfied condition "Succeeded or Failed"
Jun 15 03:28:30.498: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod downward-api-1c002c2b-8c45-43f0-bdc3-f30774a1b6c5 container dapi-container: <nil>
STEP: delete the pod
Jun 15 03:28:30.796: INFO: Waiting for pod downward-api-1c002c2b-8c45-43f0-bdc3-f30774a1b6c5 to disappear
Jun 15 03:28:30.945: INFO: Pod downward-api-1c002c2b-8c45-43f0-bdc3-f30774a1b6c5 no longer exists
[AfterEach] [sig-node] Downward API
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.507 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:31.252: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/framework/framework.go:188
... skipping 186 lines ...
test/e2e/storage/utils/framework.go:23
CSI attach test using mock driver
test/e2e/storage/csi_mock_volume.go:332
should preserve attachment policy when no CSIDriver present
test/e2e/storage/csi_mock_volume.go:360
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":7,"skipped":30,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:31.636: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 123 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":45,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 61 lines ...
• [SLOW TEST:32.620 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should have session affinity work for NodePort service [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":51,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 7 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:28:33.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-5403" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:369
Jun 15 03:28:26.013: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-28c6713d-ec3c-4595-a1ec-35f93721733c" in namespace "security-context-test-9607" to be "Succeeded or Failed"
Jun 15 03:28:26.156: INFO: Pod "alpine-nnp-true-28c6713d-ec3c-4595-a1ec-35f93721733c": Phase="Pending", Reason="", readiness=false. Elapsed: 143.434322ms
Jun 15 03:28:28.306: INFO: Pod "alpine-nnp-true-28c6713d-ec3c-4595-a1ec-35f93721733c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293590677s
Jun 15 03:28:30.454: INFO: Pod "alpine-nnp-true-28c6713d-ec3c-4595-a1ec-35f93721733c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440992205s
Jun 15 03:28:32.597: INFO: Pod "alpine-nnp-true-28c6713d-ec3c-4595-a1ec-35f93721733c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584643536s
Jun 15 03:28:34.741: INFO: Pod "alpine-nnp-true-28c6713d-ec3c-4595-a1ec-35f93721733c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728428889s
Jun 15 03:28:36.886: INFO: Pod "alpine-nnp-true-28c6713d-ec3c-4595-a1ec-35f93721733c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.872806527s
Jun 15 03:28:36.886: INFO: Pod "alpine-nnp-true-28c6713d-ec3c-4595-a1ec-35f93721733c" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:188
Jun 15 03:28:37.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9607" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
when creating containers with AllowPrivilegeEscalation
test/e2e/common/node/security_context.go:298
should allow privilege escalation when true [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:369
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":60,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:37.370: INFO: Driver "local" does not provide raw block - skipping
... skipping 52 lines ...
test/e2e/storage/utils/framework.go:23
Container restart
test/e2e/storage/subpath.go:122
should verify that container can restart successfully after configmaps modified
test/e2e/storage/subpath.go:123
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":6,"skipped":29,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Downward API
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:28:31.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jun 15 03:28:32.888: INFO: Waiting up to 5m0s for pod "downward-api-4e0bc598-f68f-420a-8e40-38994e211965" in namespace "downward-api-8246" to be "Succeeded or Failed"
Jun 15 03:28:33.033: INFO: Pod "downward-api-4e0bc598-f68f-420a-8e40-38994e211965": Phase="Pending", Reason="", readiness=false. Elapsed: 145.26813ms
Jun 15 03:28:35.178: INFO: Pod "downward-api-4e0bc598-f68f-420a-8e40-38994e211965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289604577s
Jun 15 03:28:37.322: INFO: Pod "downward-api-4e0bc598-f68f-420a-8e40-38994e211965": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434343423s
Jun 15 03:28:39.469: INFO: Pod "downward-api-4e0bc598-f68f-420a-8e40-38994e211965": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581358266s
Jun 15 03:28:41.615: INFO: Pod "downward-api-4e0bc598-f68f-420a-8e40-38994e211965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.727040112s
STEP: Saw pod success
Jun 15 03:28:41.615: INFO: Pod "downward-api-4e0bc598-f68f-420a-8e40-38994e211965" satisfied condition "Succeeded or Failed"
Jun 15 03:28:41.760: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod downward-api-4e0bc598-f68f-420a-8e40-38994e211965 container dapi-container: <nil>
STEP: delete the pod
Jun 15 03:28:42.056: INFO: Waiting for pod downward-api-4e0bc598-f68f-420a-8e40-38994e211965 to disappear
Jun 15 03:28:42.200: INFO: Pod downward-api-4e0bc598-f68f-420a-8e40-38994e211965 no longer exists
[AfterEach] [sig-node] Downward API
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:10.768 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:42.502: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/framework/framework.go:188
... skipping 50 lines ...
• [SLOW TEST:5.369 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
test/e2e/auth/framework.go:23
should support CSR API operations [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":7,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:44.003: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:188
... skipping 64 lines ...
Jun 15 03:28:03.331: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-7d8k8] to have phase Bound
Jun 15 03:28:03.475: INFO: PersistentVolumeClaim pvc-7d8k8 found and phase=Bound (143.786604ms)
STEP: Deleting the previously created pod
Jun 15 03:28:16.198: INFO: Deleting pod "pvc-volume-tester-xxtqj" in namespace "csi-mock-volumes-1118"
Jun 15 03:28:16.343: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xxtqj" to be fully deleted
STEP: Checking CSI driver logs
Jun 15 03:28:18.781: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"2a03ef82-ec5b-11ec-b004-6600b9b4041a","target_path":"/var/lib/kubelet/pods/7a6ae2ed-37a0-4649-a03d-35fe6eda914c/volumes/kubernetes.io~csi/pvc-b308d7fc-eea5-41c7-a861-18affe3f290f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-xxtqj
Jun 15 03:28:18.781: INFO: Deleting pod "pvc-volume-tester-xxtqj" in namespace "csi-mock-volumes-1118"
STEP: Deleting claim pvc-7d8k8
Jun 15 03:28:19.221: INFO: Waiting up to 2m0s for PersistentVolume pvc-b308d7fc-eea5-41c7-a861-18affe3f290f to get deleted
Jun 15 03:28:19.365: INFO: PersistentVolume pvc-b308d7fc-eea5-41c7-a861-18affe3f290f found and phase=Released (144.012365ms)
Jun 15 03:28:21.510: INFO: PersistentVolume pvc-b308d7fc-eea5-41c7-a861-18affe3f290f was removed
... skipping 44 lines ...
test/e2e/storage/utils/framework.go:23
CSI workload information using mock driver
test/e2e/storage/csi_mock_volume.go:467
should not be passed when CSIDriver does not exist
test/e2e/storage/csi_mock_volume.go:517
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":9,"skipped":45,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:44.818: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 15 03:28:35.273: INFO: Waiting up to 5m0s for pod "security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee" in namespace "security-context-4612" to be "Succeeded or Failed"
Jun 15 03:28:35.418: INFO: Pod "security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee": Phase="Pending", Reason="", readiness=false. Elapsed: 144.329159ms
Jun 15 03:28:37.564: INFO: Pod "security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290478076s
Jun 15 03:28:39.710: INFO: Pod "security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436356018s
Jun 15 03:28:41.854: INFO: Pod "security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580888217s
Jun 15 03:28:44.000: INFO: Pod "security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.726623715s
STEP: Saw pod success
Jun 15 03:28:44.000: INFO: Pod "security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee" satisfied condition "Succeeded or Failed"
Jun 15 03:28:44.145: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee container test-container: <nil>
STEP: delete the pod
Jun 15 03:28:44.440: INFO: Waiting for pod security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee to disappear
Jun 15 03:28:44.585: INFO: Pod security-context-e2ec3b24-dcff-4ebe-a40c-b279557fcfee no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:10.759 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":54,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 29 lines ...
Jun 15 03:28:32.154: INFO: PersistentVolumeClaim pvc-644sx found but phase is Pending instead of Bound.
Jun 15 03:28:34.299: INFO: PersistentVolumeClaim pvc-644sx found and phase=Bound (15.158929989s)
Jun 15 03:28:34.299: INFO: Waiting up to 3m0s for PersistentVolume local-m4vvf to have phase Bound
Jun 15 03:28:34.443: INFO: PersistentVolume local-m4vvf found and phase=Bound (144.031219ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rvhz
STEP: Creating a pod to test subpath
Jun 15 03:28:34.880: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rvhz" in namespace "provisioning-2209" to be "Succeeded or Failed"
Jun 15 03:28:35.024: INFO: Pod "pod-subpath-test-preprovisionedpv-rvhz": Phase="Pending", Reason="", readiness=false. Elapsed: 144.428469ms
Jun 15 03:28:37.169: INFO: Pod "pod-subpath-test-preprovisionedpv-rvhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289626292s
Jun 15 03:28:39.317: INFO: Pod "pod-subpath-test-preprovisionedpv-rvhz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437393751s
Jun 15 03:28:41.466: INFO: Pod "pod-subpath-test-preprovisionedpv-rvhz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.585944501s
STEP: Saw pod success
Jun 15 03:28:41.466: INFO: Pod "pod-subpath-test-preprovisionedpv-rvhz" satisfied condition "Succeeded or Failed"
Jun 15 03:28:41.610: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-preprovisionedpv-rvhz container test-container-subpath-preprovisionedpv-rvhz: <nil>
STEP: delete the pod
Jun 15 03:28:41.916: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rvhz to disappear
Jun 15 03:28:42.060: INFO: Pod pod-subpath-test-preprovisionedpv-rvhz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rvhz
Jun 15 03:28:42.060: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rvhz" in namespace "provisioning-2209"
... skipping 30 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":43,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 382 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support two pods which have the same volume definition
test/e2e/storage/testsuites/ephemeral.go:216
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:48.731: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:188
... skipping 220 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:50
should provision storage with pvc data source
test/e2e/storage/testsuites/provisioning.go:421
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":6,"skipped":47,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Flexvolumes
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 63 lines ...
test/e2e/apps/framework.go:23
Deployment should have a working scale subresource [Conformance]
test/e2e/framework/framework.go:652
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":10,"skipped":61,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:50.286: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":15,"skipped":73,"failed":0}
[BeforeEach] [sig-node] Containers
test/e2e/framework/framework.go:187
[1mSTEP[0m: Creating a kubernetes client
Jun 15 03:28:46.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
• [SLOW TEST:5.880 seconds]
[sig-node] Containers
test/e2e/common/node/framework.go:23
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":73,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:52.108: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
Jun 15 03:28:31.489: INFO: PersistentVolumeClaim pvc-kkl5x found but phase is Pending instead of Bound.
Jun 15 03:28:33.635: INFO: PersistentVolumeClaim pvc-kkl5x found and phase=Bound (13.01379441s)
Jun 15 03:28:33.635: INFO: Waiting up to 3m0s for PersistentVolume local-bdzwq to have phase Bound
Jun 15 03:28:33.779: INFO: PersistentVolume local-bdzwq found and phase=Bound (144.142586ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4gsg
STEP: Creating a pod to test subpath
Jun 15 03:28:34.217: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4gsg" in namespace "provisioning-8283" to be "Succeeded or Failed"
Jun 15 03:28:34.361: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg": Phase="Pending", Reason="", readiness=false. Elapsed: 144.082711ms
Jun 15 03:28:36.506: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289219773s
Jun 15 03:28:38.650: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433563141s
Jun 15 03:28:40.796: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578936077s
STEP: Saw pod success
Jun 15 03:28:40.796: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg" satisfied condition "Succeeded or Failed"
Jun 15 03:28:40.940: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-preprovisionedpv-4gsg container test-container-subpath-preprovisionedpv-4gsg: <nil>
STEP: delete the pod
Jun 15 03:28:41.240: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4gsg to disappear
Jun 15 03:28:41.384: INFO: Pod pod-subpath-test-preprovisionedpv-4gsg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4gsg
Jun 15 03:28:41.384: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4gsg" in namespace "provisioning-8283"
STEP: Creating pod pod-subpath-test-preprovisionedpv-4gsg
STEP: Creating a pod to test subpath
Jun 15 03:28:41.676: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4gsg" in namespace "provisioning-8283" to be "Succeeded or Failed"
Jun 15 03:28:41.820: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg": Phase="Pending", Reason="", readiness=false. Elapsed: 143.910218ms
Jun 15 03:28:43.968: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29208324s
Jun 15 03:28:46.116: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439379103s
Jun 15 03:28:48.260: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.583866721s
STEP: Saw pod success
Jun 15 03:28:48.260: INFO: Pod "pod-subpath-test-preprovisionedpv-4gsg" satisfied condition "Succeeded or Failed"
Jun 15 03:28:48.405: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-preprovisionedpv-4gsg container test-container-subpath-preprovisionedpv-4gsg: <nil>
STEP: delete the pod
Jun 15 03:28:48.729: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4gsg to disappear
Jun 15 03:28:48.873: INFO: Pod pod-subpath-test-preprovisionedpv-4gsg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4gsg
Jun 15 03:28:48.873: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4gsg" in namespace "provisioning-8283"
... skipping 30 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:28:52.759: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/framework/framework.go:188
... skipping 38 lines ...
Jun 15 03:28:31.905: INFO: PersistentVolumeClaim pvc-p6k2s found but phase is Pending instead of Bound.
Jun 15 03:28:34.050: INFO: PersistentVolumeClaim pvc-p6k2s found and phase=Bound (6.576610139s)
Jun 15 03:28:34.050: INFO: Waiting up to 3m0s for PersistentVolume aws-6qqtr to have phase Bound
Jun 15 03:28:34.194: INFO: PersistentVolume aws-6qqtr found and phase=Bound (143.878687ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-fptj
STEP: Creating a pod to test exec-volume-test
Jun 15 03:28:34.625: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-fptj" in namespace "volume-7167" to be "Succeeded or Failed"
Jun 15 03:28:34.768: INFO: Pod "exec-volume-test-preprovisionedpv-fptj": Phase="Pending", Reason="", readiness=false. Elapsed: 143.190426ms
Jun 15 03:28:36.912: INFO: Pod "exec-volume-test-preprovisionedpv-fptj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287028792s
Jun 15 03:28:39.057: INFO: Pod "exec-volume-test-preprovisionedpv-fptj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43176214s
Jun 15 03:28:41.202: INFO: Pod "exec-volume-test-preprovisionedpv-fptj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577044729s
Jun 15 03:28:43.346: INFO: Pod "exec-volume-test-preprovisionedpv-fptj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721068188s
Jun 15 03:28:45.491: INFO: Pod "exec-volume-test-preprovisionedpv-fptj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.865921669s
Jun 15 03:28:47.639: INFO: Pod "exec-volume-test-preprovisionedpv-fptj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.01387066s
STEP: Saw pod success
Jun 15 03:28:47.639: INFO: Pod "exec-volume-test-preprovisionedpv-fptj" satisfied condition "Succeeded or Failed"
Jun 15 03:28:47.783: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod exec-volume-test-preprovisionedpv-fptj container exec-container-preprovisionedpv-fptj: <nil>
STEP: delete the pod
Jun 15 03:28:48.076: INFO: Waiting for pod exec-volume-test-preprovisionedpv-fptj to disappear
Jun 15 03:28:48.219: INFO: Pod exec-volume-test-preprovisionedpv-fptj no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-fptj
Jun 15 03:28:48.219: INFO: Deleting pod "exec-volume-test-preprovisionedpv-fptj" in namespace "volume-7167"
STEP: Deleting pv and pvc
Jun 15 03:28:48.362: INFO: Deleting PersistentVolumeClaim "pvc-p6k2s"
Jun 15 03:28:48.524: INFO: Deleting PersistentVolume "aws-6qqtr"
Jun 15 03:28:48.921: INFO: Couldn't delete PD "aws://sa-east-1a/vol-043d9b047cdf23dd4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-043d9b047cdf23dd4 is currently attached to i-0b28fcd2505512be6
status code: 400, request id: e8ce90c3-f542-4ba1-980d-1b0a312b343a
Jun 15 03:28:54.782: INFO: Couldn't delete PD "aws://sa-east-1a/vol-043d9b047cdf23dd4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-043d9b047cdf23dd4 is currently attached to i-0b28fcd2505512be6
status code: 400, request id: 8ef4b137-fe60-44c7-bada-ff5ed49507af
Jun 15 03:29:00.592: INFO: Successfully deleted PD "aws://sa-east-1a/vol-043d9b047cdf23dd4".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/framework/framework.go:188
Jun 15 03:29:00.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7167" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":51,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:00.899: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
test/e2e/storage/testsuites/ephemeral.go:216
Driver local doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":48,"failed":0}
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:27:31.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 112 lines ...
• [SLOW TEST:90.261 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to update service type to NodePort listening on same port number but different protocols
test/e2e/network/service.go:1242
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":8,"skipped":48,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:02.160: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 88 lines ...
• [SLOW TEST:30.467 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a configMap. [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 79 lines ...
• [SLOW TEST:29.961 seconds]
[sig-network] Conntrack
test/e2e/network/common/framework.go:23
should be able to preserve UDP traffic when server pod cycles for a NodePort service
test/e2e/network/conntrack.go:132
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":6,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:07.353: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:188
... skipping 76 lines ...
• [SLOW TEST:22.901 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":10,"skipped":55,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:07.811: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 33 lines ...
test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
Driver hostPath doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-node] Pods
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:05.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:29:09.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8081" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 84 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":10,"skipped":88,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:11.054: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 122 lines ...
• [SLOW TEST:21.209 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:11.546: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
test/e2e/framework/framework.go:188
... skipping 71 lines ...
Jun 15 03:29:00.898: INFO: PersistentVolumeClaim pvc-ghds7 found but phase is Pending instead of Bound.
Jun 15 03:29:03.043: INFO: PersistentVolumeClaim pvc-ghds7 found and phase=Bound (13.018374367s)
Jun 15 03:29:03.043: INFO: Waiting up to 3m0s for PersistentVolume local-p7jdz to have phase Bound
Jun 15 03:29:03.187: INFO: PersistentVolume local-p7jdz found and phase=Bound (144.145654ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jr2v
STEP: Creating a pod to test subpath
Jun 15 03:29:03.620: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jr2v" in namespace "provisioning-7420" to be "Succeeded or Failed"
Jun 15 03:29:03.764: INFO: Pod "pod-subpath-test-preprovisionedpv-jr2v": Phase="Pending", Reason="", readiness=false. Elapsed: 144.193264ms
Jun 15 03:29:05.909: INFO: Pod "pod-subpath-test-preprovisionedpv-jr2v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288607068s
Jun 15 03:29:08.054: INFO: Pod "pod-subpath-test-preprovisionedpv-jr2v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433893342s
Jun 15 03:29:10.199: INFO: Pod "pod-subpath-test-preprovisionedpv-jr2v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578474838s
STEP: Saw pod success
Jun 15 03:29:10.199: INFO: Pod "pod-subpath-test-preprovisionedpv-jr2v" satisfied condition "Succeeded or Failed"
Jun 15 03:29:10.343: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-preprovisionedpv-jr2v container test-container-subpath-preprovisionedpv-jr2v: <nil>
STEP: delete the pod
Jun 15 03:29:10.637: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jr2v to disappear
Jun 15 03:29:10.781: INFO: Pod pod-subpath-test-preprovisionedpv-jr2v no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jr2v
Jun 15 03:29:10.781: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jr2v" in namespace "provisioning-7420"
... skipping 30 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":33,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:14.709: INFO: Only supported for providers [azure] (not aws)
... skipping 112 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
test/e2e/common/storage/host_path.go:39
[It] should support r/w [NodeConformance]
test/e2e/common/storage/host_path.go:67
STEP: Creating a pod to test hostPath r/w
Jun 15 03:29:08.547: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1559" to be "Succeeded or Failed"
Jun 15 03:29:08.691: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 143.858721ms
Jun 15 03:29:10.836: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288644227s
Jun 15 03:29:12.980: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43363715s
Jun 15 03:29:15.126: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578733339s
STEP: Saw pod success
Jun 15 03:29:15.126: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 15 03:29:15.270: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jun 15 03:29:15.563: INFO: Waiting for pod pod-host-path-test to disappear
Jun 15 03:29:15.707: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
test/e2e/framework/framework.go:188
... skipping 90 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":9,"skipped":49,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:16.120: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 182 lines ...
test/e2e/common/node/runtime.go:43
on terminated container
test/e2e/common/node/runtime.go:136
should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":112,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:16.675: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 49 lines ...
Jun 15 03:29:02.011: INFO: PersistentVolumeClaim pvc-8hdmq found but phase is Pending instead of Bound.
Jun 15 03:29:04.190: INFO: PersistentVolumeClaim pvc-8hdmq found and phase=Bound (4.467944997s)
Jun 15 03:29:04.190: INFO: Waiting up to 3m0s for PersistentVolume local-nlxmn to have phase Bound
Jun 15 03:29:04.334: INFO: PersistentVolume local-nlxmn found and phase=Bound (143.837643ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5k9t
STEP: Creating a pod to test subpath
Jun 15 03:29:04.768: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5k9t" in namespace "provisioning-6166" to be "Succeeded or Failed"
Jun 15 03:29:04.912: INFO: Pod "pod-subpath-test-preprovisionedpv-5k9t": Phase="Pending", Reason="", readiness=false. Elapsed: 143.89908ms
Jun 15 03:29:07.056: INFO: Pod "pod-subpath-test-preprovisionedpv-5k9t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288432492s
Jun 15 03:29:09.202: INFO: Pod "pod-subpath-test-preprovisionedpv-5k9t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433947547s
Jun 15 03:29:11.348: INFO: Pod "pod-subpath-test-preprovisionedpv-5k9t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.580137445s
STEP: Saw pod success
Jun 15 03:29:11.348: INFO: Pod "pod-subpath-test-preprovisionedpv-5k9t" satisfied condition "Succeeded or Failed"
Jun 15 03:29:11.492: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-preprovisionedpv-5k9t container test-container-volume-preprovisionedpv-5k9t: <nil>
STEP: delete the pod
Jun 15 03:29:11.791: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5k9t to disappear
Jun 15 03:29:11.935: INFO: Pod pod-subpath-test-preprovisionedpv-5k9t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5k9t
Jun 15 03:29:11.936: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5k9t" in namespace "provisioning-6166"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":61,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:16.758: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 76 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:29:17.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-8913" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":12,"skipped":122,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:18.185: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/framework/framework.go:188
... skipping 44 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
test/e2e/common/node/downwardapi.go:112
STEP: Creating a pod to test downward api env vars
Jun 15 03:29:10.841: INFO: Waiting up to 5m0s for pod "downward-api-5b5bac01-4e61-4a6d-8da3-68f45c574d73" in namespace "downward-api-8874" to be "Succeeded or Failed"
Jun 15 03:29:10.985: INFO: Pod "downward-api-5b5bac01-4e61-4a6d-8da3-68f45c574d73": Phase="Pending", Reason="", readiness=false. Elapsed: 143.455232ms
Jun 15 03:29:13.130: INFO: Pod "downward-api-5b5bac01-4e61-4a6d-8da3-68f45c574d73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288907332s
Jun 15 03:29:15.275: INFO: Pod "downward-api-5b5bac01-4e61-4a6d-8da3-68f45c574d73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433312623s
Jun 15 03:29:17.418: INFO: Pod "downward-api-5b5bac01-4e61-4a6d-8da3-68f45c574d73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577102858s
STEP: Saw pod success
Jun 15 03:29:17.419: INFO: Pod "downward-api-5b5bac01-4e61-4a6d-8da3-68f45c574d73" satisfied condition "Succeeded or Failed"
Jun 15 03:29:17.562: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod downward-api-5b5bac01-4e61-4a6d-8da3-68f45c574d73 container dapi-container: <nil>
STEP: delete the pod
Jun 15 03:29:17.859: INFO: Waiting for pod downward-api-5b5bac01-4e61-4a6d-8da3-68f45c574d73 to disappear
Jun 15 03:29:18.009: INFO: Pod downward-api-5b5bac01-4e61-4a6d-8da3-68f45c574d73 no longer exists
[AfterEach] [sig-node] Downward API
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.681 seconds]
[sig-node] Downward API
test/e2e/common/node/framework.go:23
should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
test/e2e/common/node/downwardapi.go:112
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":8,"skipped":40,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:27:47.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 36 lines ...
Jun 15 03:27:53.180: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7248
Jun 15 03:27:53.325: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7248
Jun 15 03:27:53.470: INFO: creating *v1.StatefulSet: csi-mock-volumes-7248-5231/csi-mockplugin
Jun 15 03:27:53.616: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7248
Jun 15 03:27:53.767: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7248"
Jun 15 03:27:53.911: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7248 to register on node i-0a5092cc559ae3bff
I0615 03:27:58.558105 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0615 03:27:58.703706 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7248","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0615 03:27:58.848930 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0615 03:27:58.994122 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0615 03:27:59.314311 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7248","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0615 03:28:00.101483 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7248"},"Error":"","FullError":null}
STEP: Creating pod
Jun 15 03:28:04.290: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0615 03:28:04.613285 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0615 03:28:05.763315 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8"}}},"Error":"","FullError":null}
I0615 03:28:06.783744 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:28:06.928647 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:28:07.073113 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 15 03:28:07.219: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:28:07.219: INFO: ExecWithOptions: Clientset creation
Jun 15 03:28:07.220: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-7248-5231/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-7248%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-7248%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
I0615 03:28:08.154801 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-7248/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8","storage.kubernetes.io/csiProvisionerIdentity":"1655263679070-8081-csi-mock-csi-mock-volumes-7248"}},"Response":{},"Error":"","FullError":null}
I0615 03:28:08.299855 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:28:08.445213 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:28:08.590827 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 15 03:28:08.737: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:28:08.738: INFO: ExecWithOptions: Clientset creation
Jun 15 03:28:08.738: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-7248-5231/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F5e88a925-14e8-441f-b43d-1e9ab87ca687%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F5e88a925-14e8-441f-b43d-1e9ab87ca687%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 15 03:28:09.678: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:28:09.678: INFO: ExecWithOptions: Clientset creation
Jun 15 03:28:09.679: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-7248-5231/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F5e88a925-14e8-441f-b43d-1e9ab87ca687%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F5e88a925-14e8-441f-b43d-1e9ab87ca687%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
Jun 15 03:28:10.625: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:28:10.626: INFO: ExecWithOptions: Clientset creation
Jun 15 03:28:10.626: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-7248-5231/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F5e88a925-14e8-441f-b43d-1e9ab87ca687%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0615 03:28:11.553874 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-7248/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/5e88a925-14e8-441f-b43d-1e9ab87ca687/volumes/kubernetes.io~csi/pvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8","storage.kubernetes.io/csiProvisionerIdentity":"1655263679070-8081-csi-mock-csi-mock-volumes-7248"}},"Response":{},"Error":"","FullError":null}
Jun 15 03:28:12.872: INFO: Deleting pod "pvc-volume-tester-ldqjm" in namespace "csi-mock-volumes-7248"
Jun 15 03:28:13.022: INFO: Wait up to 5m0s for pod "pvc-volume-tester-ldqjm" to be fully deleted
Jun 15 03:28:14.661: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 03:28:14.662: INFO: ExecWithOptions: Clientset creation
Jun 15 03:28:14.662: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-7248-5231/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F5e88a925-14e8-441f-b43d-1e9ab87ca687%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
I0615 03:28:15.609167 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5e88a925-14e8-441f-b43d-1e9ab87ca687/volumes/kubernetes.io~csi/pvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8/mount"},"Response":{},"Error":"","FullError":null}
I0615 03:28:15.764403 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0615 03:28:15.910346 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-7248/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null}
I0615 03:28:17.480741 6569 csi.go:444] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jun 15 03:28:18.462: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hmbdg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7248", SelfLink:"", UID:"969e9ce7-1eac-47b1-a70b-541c667f0dd8", ResourceVersion:"8937", Generation:0, CreationTimestamp:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025f7b18), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000897e50), VolumeMode:(*v1.PersistentVolumeMode)(0xc000897e70), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 15 03:28:18.463: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hmbdg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7248", SelfLink:"", UID:"969e9ce7-1eac-47b1-a70b-541c667f0dd8", ResourceVersion:"8940", Generation:0, CreationTimestamp:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"i-0a5092cc559ae3bff"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0026f49a8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0026f49d8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001c469a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001c469c0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), 
AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 15 03:28:18.463: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hmbdg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7248", SelfLink:"", UID:"969e9ce7-1eac-47b1-a70b-541c667f0dd8", ResourceVersion:"8941", Generation:0, CreationTimestamp:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7248", "volume.kubernetes.io/selected-node":"i-0a5092cc559ae3bff", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7248"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002596360), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002596390), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025963c0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", 
StorageClassName:(*string)(0xc002564a90), VolumeMode:(*v1.PersistentVolumeMode)(0xc002564aa0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 15 03:28:18.463: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hmbdg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7248", SelfLink:"", UID:"969e9ce7-1eac-47b1-a70b-541c667f0dd8", ResourceVersion:"8976", Generation:0, CreationTimestamp:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7248", "volume.kubernetes.io/selected-node":"i-0a5092cc559ae3bff", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7248"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002596408), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002596438), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 5, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002596468), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8", StorageClassName:(*string)(0xc002564ad0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002564ae0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
Jun 15 03:28:18.463: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hmbdg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7248", SelfLink:"", UID:"969e9ce7-1eac-47b1-a70b-541c667f0dd8", ResourceVersion:"8977", Generation:0, CreationTimestamp:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7248", "volume.kubernetes.io/selected-node":"i-0a5092cc559ae3bff", "volume.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7248"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002596540), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025966a8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 5, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025966f0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 15, 3, 28, 5, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002596738), Subresource:"status"}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-969e9ce7-1eac-47b1-a70b-541c667f0dd8", StorageClassName:(*string)(0xc002564b20), VolumeMode:(*v1.PersistentVolumeMode)(0xc002564b30), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil), AllocatedResources:v1.ResourceList(nil), ResizeStatus:(*v1.PersistentVolumeClaimResizeStatus)(nil)}}
... skipping 49 lines ...
test/e2e/storage/utils/framework.go:23
storage capacity
test/e2e/storage/csi_mock_volume.go:1100
exhausted, late binding, no topology
test/e2e/storage/csi_mock_volume.go:1158
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":9,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:21.069: INFO: Driver hostPath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/framework/framework.go:188
... skipping 85 lines ...
• [SLOW TEST:34.103 seconds]
[sig-network] EndpointSlice
test/e2e/network/common/framework.go:23
should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":11,"skipped":52,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:21.395: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should create read/write inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":6,"skipped":45,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:24.838: INFO: Only supported for providers [azure] (not aws)
... skipping 153 lines ...
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should store data
test/e2e/storage/testsuites/volumes.go:161
Jun 15 03:28:51.328: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 15 03:28:51.619: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-7563" in namespace "volume-7563" to be "Succeeded or Failed"
Jun 15 03:28:51.764: INFO: Pod "hostpath-symlink-prep-volume-7563": Phase="Pending", Reason="", readiness=false. Elapsed: 144.361975ms
Jun 15 03:28:53.909: INFO: Pod "hostpath-symlink-prep-volume-7563": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289783734s
Jun 15 03:28:56.055: INFO: Pod "hostpath-symlink-prep-volume-7563": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435224403s
STEP: Saw pod success
Jun 15 03:28:56.055: INFO: Pod "hostpath-symlink-prep-volume-7563" satisfied condition "Succeeded or Failed"
Jun 15 03:28:56.055: INFO: Deleting pod "hostpath-symlink-prep-volume-7563" in namespace "volume-7563"
Jun 15 03:28:56.204: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-7563" to be fully deleted
Jun 15 03:28:56.348: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Jun 15 03:29:00.782: INFO: Running '/logs/artifacts/59eecc33-ec59-11ec-8414-26e9cf6cfe64/kubectl --server=https://api.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-7563 exec hostpathsymlink-injector --namespace=volume-7563 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-7563' > /opt/0/index.html'
... skipping 45 lines ...
STEP: Deleting pod hostpathsymlink-client in namespace volume-7563
Jun 15 03:29:19.855: INFO: Waiting for pod hostpathsymlink-client to disappear
Jun 15 03:29:19.999: INFO: Pod hostpathsymlink-client still exists
Jun 15 03:29:21.999: INFO: Waiting for pod hostpathsymlink-client to disappear
Jun 15 03:29:22.143: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Jun 15 03:29:22.291: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-7563" in namespace "volume-7563" to be "Succeeded or Failed"
Jun 15 03:29:22.435: INFO: Pod "hostpath-symlink-prep-volume-7563": Phase="Pending", Reason="", readiness=false. Elapsed: 144.203177ms
Jun 15 03:29:24.581: INFO: Pod "hostpath-symlink-prep-volume-7563": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29003692s
Jun 15 03:29:26.725: INFO: Pod "hostpath-symlink-prep-volume-7563": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434480346s
STEP: Saw pod success
Jun 15 03:29:26.725: INFO: Pod "hostpath-symlink-prep-volume-7563" satisfied condition "Succeeded or Failed"
Jun 15 03:29:26.725: INFO: Deleting pod "hostpath-symlink-prep-volume-7563" in namespace "volume-7563"
Jun 15 03:29:26.876: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-7563" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/framework/framework.go:188
Jun 15 03:29:27.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7563" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should store data
test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":11,"skipped":68,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:27.340: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node","total":-1,"completed":7,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:26.074: INFO: >>> kubeConfig: /root/.kube/config
... skipping 38 lines ...
test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating pod pod-subpath-test-secret-4pfv
STEP: Creating a pod to test atomic-volume-subpath
Jun 15 03:29:02.381: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4pfv" in namespace "subpath-9959" to be "Succeeded or Failed"
Jun 15 03:29:02.525: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Pending", Reason="", readiness=false. Elapsed: 143.883105ms
Jun 15 03:29:04.670: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Running", Reason="", readiness=true. Elapsed: 2.288922589s
Jun 15 03:29:06.814: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Running", Reason="", readiness=true. Elapsed: 4.433079907s
Jun 15 03:29:08.958: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Running", Reason="", readiness=true. Elapsed: 6.577520025s
Jun 15 03:29:11.103: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Running", Reason="", readiness=true. Elapsed: 8.722200361s
Jun 15 03:29:13.248: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Running", Reason="", readiness=true. Elapsed: 10.867777881s
... skipping 2 lines ...
Jun 15 03:29:19.685: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Running", Reason="", readiness=true. Elapsed: 17.304786681s
Jun 15 03:29:21.830: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Running", Reason="", readiness=true. Elapsed: 19.449676087s
Jun 15 03:29:23.976: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Running", Reason="", readiness=true. Elapsed: 21.595172738s
Jun 15 03:29:26.120: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Running", Reason="", readiness=false. Elapsed: 23.739744147s
Jun 15 03:29:28.264: INFO: Pod "pod-subpath-test-secret-4pfv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.883674818s
STEP: Saw pod success
Jun 15 03:29:28.264: INFO: Pod "pod-subpath-test-secret-4pfv" satisfied condition "Succeeded or Failed"
Jun 15 03:29:28.408: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-secret-4pfv container test-container-subpath-secret-4pfv: <nil>
STEP: delete the pod
Jun 15 03:29:28.714: INFO: Waiting for pod pod-subpath-test-secret-4pfv to disappear
Jun 15 03:29:28.859: INFO: Pod pod-subpath-test-secret-4pfv no longer exists
STEP: Deleting pod pod-subpath-test-secret-4pfv
Jun 15 03:29:28.860: INFO: Deleting pod "pod-subpath-test-secret-4pfv" in namespace "subpath-9959"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
Atomic writer volumes
test/e2e/storage/subpath.go:36
should support subpaths with secret pod [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 145 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":12,"skipped":57,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:36.419: INFO: Only supported for providers [vsphere] (not aws)
... skipping 115 lines ...
test/e2e/kubectl/kubectl.go:380
should return command exit codes
test/e2e/kubectl/kubectl.go:500
running a failing command
test/e2e/kubectl/kubectl.go:520
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":10,"skipped":72,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:36.729: INFO: Driver local doesn't support ext3 -- skipping
... skipping 43 lines ...
• [SLOW TEST:68.424 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
should be restarted by liveness probe after startup probe enables it
test/e2e/common/node/container_probe.go:382
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":10,"skipped":112,"failed":0}
SSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":7,"skipped":74,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:16.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 182 lines ...
• [SLOW TEST:10.244 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":8,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:37.783: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
test/e2e/framework/framework.go:188
... skipping 109 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":9,"skipped":60,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:39.462: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 103 lines ...
Jun 15 03:28:38.836: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 15 03:28:38.982: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathnjgfx] to have phase Bound
Jun 15 03:28:39.126: INFO: PersistentVolumeClaim csi-hostpathnjgfx found but phase is Pending instead of Bound.
Jun 15 03:28:41.270: INFO: PersistentVolumeClaim csi-hostpathnjgfx found and phase=Bound (2.288108032s)
STEP: Creating pod pod-subpath-test-dynamicpv-q722
STEP: Creating a pod to test atomic-volume-subpath
Jun 15 03:28:41.706: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-q722" in namespace "provisioning-4567" to be "Succeeded or Failed"
Jun 15 03:28:41.850: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Pending", Reason="", readiness=false. Elapsed: 143.868078ms
Jun 15 03:28:43.995: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288528971s
Jun 15 03:28:46.141: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435251353s
Jun 15 03:28:48.287: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580330949s
Jun 15 03:28:50.433: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726974241s
Jun 15 03:28:52.577: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Pending", Reason="", readiness=false. Elapsed: 10.871122633s
... skipping 8 lines ...
Jun 15 03:29:11.883: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Running", Reason="", readiness=true. Elapsed: 30.176764298s
Jun 15 03:29:14.027: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Running", Reason="", readiness=true. Elapsed: 32.321253529s
Jun 15 03:29:16.173: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Running", Reason="", readiness=true. Elapsed: 34.46632322s
Jun 15 03:29:18.325: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Running", Reason="", readiness=false. Elapsed: 36.61849152s
Jun 15 03:29:20.474: INFO: Pod "pod-subpath-test-dynamicpv-q722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.768263565s
STEP: Saw pod success
Jun 15 03:29:20.475: INFO: Pod "pod-subpath-test-dynamicpv-q722" satisfied condition "Succeeded or Failed"
Jun 15 03:29:20.619: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-dynamicpv-q722 container test-container-subpath-dynamicpv-q722: <nil>
STEP: delete the pod
Jun 15 03:29:20.914: INFO: Waiting for pod pod-subpath-test-dynamicpv-q722 to disappear
Jun 15 03:29:21.059: INFO: Pod pod-subpath-test-dynamicpv-q722 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-q722
Jun 15 03:29:21.060: INFO: Deleting pod "pod-subpath-test-dynamicpv-q722" in namespace "provisioning-4567"
... skipping 60 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":23,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":10,"skipped":68,"failed":0}
[BeforeEach] [sig-node] Kubelet
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:34.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
test/e2e/common/node/framework.go:23
when scheduling a read only busybox container
test/e2e/common/node/kubelet.go:190
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":68,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 44 lines ...
STEP: Deleting pod hostexec-i-0a5092cc559ae3bff-qdxzp in namespace volumemode-9493
Jun 15 03:29:29.600: INFO: Deleting pod "pod-e2ec263c-affc-40ec-b78f-5fea4e60d506" in namespace "volumemode-9493"
Jun 15 03:29:29.745: INFO: Wait up to 5m0s for pod "pod-e2ec263c-affc-40ec-b78f-5fea4e60d506" to be fully deleted
STEP: Deleting pv and pvc
Jun 15 03:29:34.055: INFO: Deleting PersistentVolumeClaim "pvc-ltddp"
Jun 15 03:29:34.216: INFO: Deleting PersistentVolume "aws-9pzdm"
Jun 15 03:29:34.770: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0ed44a290fd5b2ec3", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ed44a290fd5b2ec3 is currently attached to i-0a5092cc559ae3bff
status code: 400, request id: dba53253-b6de-4359-924a-5b2cacee41ce
Jun 15 03:29:40.538: INFO: Successfully deleted PD "aws://sa-east-1a/vol-0ed44a290fd5b2ec3".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/framework/framework.go:188
Jun 15 03:29:40.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-9493" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:50
should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":63,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:40.886: INFO: Only supported for providers [azure] (not aws)
... skipping 269 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:50
should provision storage with pvc data source
test/e2e/storage/testsuites/provisioning.go:421
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":5,"skipped":41,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 50 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":59,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:44.676: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 169 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support two pods which have the same volume definition
test/e2e/storage/testsuites/ephemeral.go:216
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition","total":-1,"completed":9,"skipped":104,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:45.363: INFO: Only supported for providers [openstack] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 15 03:29:38.241: INFO: Waiting up to 5m0s for pod "pod-af34e5ef-0d51-45d3-80d2-0949c31a771b" in namespace "emptydir-691" to be "Succeeded or Failed"
Jun 15 03:29:38.385: INFO: Pod "pod-af34e5ef-0d51-45d3-80d2-0949c31a771b": Phase="Pending", Reason="", readiness=false. Elapsed: 143.527754ms
Jun 15 03:29:40.531: INFO: Pod "pod-af34e5ef-0d51-45d3-80d2-0949c31a771b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289263349s
Jun 15 03:29:42.677: INFO: Pod "pod-af34e5ef-0d51-45d3-80d2-0949c31a771b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435575801s
Jun 15 03:29:44.822: INFO: Pod "pod-af34e5ef-0d51-45d3-80d2-0949c31a771b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.580114682s
STEP: Saw pod success
Jun 15 03:29:44.822: INFO: Pod "pod-af34e5ef-0d51-45d3-80d2-0949c31a771b" satisfied condition "Succeeded or Failed"
Jun 15 03:29:44.966: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-af34e5ef-0d51-45d3-80d2-0949c31a771b container test-container: <nil>
STEP: delete the pod
Jun 15 03:29:45.259: INFO: Waiting for pod pod-af34e5ef-0d51-45d3-80d2-0949c31a771b to disappear
Jun 15 03:29:45.403: INFO: Pod pod-af34e5ef-0d51-45d3-80d2-0949c31a771b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.640 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":126,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:45.710: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
• [SLOW TEST:8.416 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should allow pods to hairpin back to themselves through services
test/e2e/network/service.go:1014
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":9,"skipped":62,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 25 lines ...
Jun 15 03:29:31.483: INFO: PersistentVolumeClaim pvc-z4rm9 found but phase is Pending instead of Bound.
Jun 15 03:29:33.629: INFO: PersistentVolumeClaim pvc-z4rm9 found and phase=Bound (15.159350516s)
Jun 15 03:29:33.629: INFO: Waiting up to 3m0s for PersistentVolume local-55c8q to have phase Bound
Jun 15 03:29:33.773: INFO: PersistentVolume local-55c8q found and phase=Bound (143.750834ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kr7r
STEP: Creating a pod to test subpath
Jun 15 03:29:34.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kr7r" in namespace "provisioning-6119" to be "Succeeded or Failed"
Jun 15 03:29:34.372: INFO: Pod "pod-subpath-test-preprovisionedpv-kr7r": Phase="Pending", Reason="", readiness=false. Elapsed: 151.632341ms
Jun 15 03:29:36.516: INFO: Pod "pod-subpath-test-preprovisionedpv-kr7r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295829076s
Jun 15 03:29:38.666: INFO: Pod "pod-subpath-test-preprovisionedpv-kr7r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445637569s
Jun 15 03:29:40.810: INFO: Pod "pod-subpath-test-preprovisionedpv-kr7r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.590096047s
Jun 15 03:29:42.955: INFO: Pod "pod-subpath-test-preprovisionedpv-kr7r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.735224086s
STEP: Saw pod success
Jun 15 03:29:42.955: INFO: Pod "pod-subpath-test-preprovisionedpv-kr7r" satisfied condition "Succeeded or Failed"
Jun 15 03:29:43.099: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-preprovisionedpv-kr7r container test-container-subpath-preprovisionedpv-kr7r: <nil>
STEP: delete the pod
Jun 15 03:29:43.393: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kr7r to disappear
Jun 15 03:29:43.537: INFO: Pod pod-subpath-test-preprovisionedpv-kr7r no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kr7r
Jun 15 03:29:43.537: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kr7r" in namespace "provisioning-6119"
... skipping 26 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":65,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":78,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:28:46.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
• [SLOW TEST:60.206 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":8,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:46.512: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:188
... skipping 21 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test substitution in volume subpath
Jun 15 03:29:41.623: INFO: Waiting up to 5m0s for pod "var-expansion-10572238-923e-4509-ad82-608ee2523b71" in namespace "var-expansion-3599" to be "Succeeded or Failed"
Jun 15 03:29:41.767: INFO: Pod "var-expansion-10572238-923e-4509-ad82-608ee2523b71": Phase="Pending", Reason="", readiness=false. Elapsed: 144.5507ms
Jun 15 03:29:43.914: INFO: Pod "var-expansion-10572238-923e-4509-ad82-608ee2523b71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290999295s
Jun 15 03:29:46.060: INFO: Pod "var-expansion-10572238-923e-4509-ad82-608ee2523b71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437803954s
STEP: Saw pod success
Jun 15 03:29:46.061: INFO: Pod "var-expansion-10572238-923e-4509-ad82-608ee2523b71" satisfied condition "Succeeded or Failed"
Jun 15 03:29:46.205: INFO: Trying to get logs from node i-05fe3937684c9d649 pod var-expansion-10572238-923e-4509-ad82-608ee2523b71 container dapi-container: <nil>
STEP: delete the pod
Jun 15 03:29:46.501: INFO: Waiting for pod var-expansion-10572238-923e-4509-ad82-608ee2523b71 to disappear
Jun 15 03:29:46.647: INFO: Pod var-expansion-10572238-923e-4509-ad82-608ee2523b71 no longer exists
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.485 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
should allow substituting values in a volume subpath [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":12,"skipped":69,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Secrets
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating secret with name secret-test-c5fd8df0-f63b-4bfe-99f5-bf90a6626177
STEP: Creating a pod to test consume secrets
Jun 15 03:29:42.277: INFO: Waiting up to 5m0s for pod "pod-secrets-d7ec9f18-f4ef-4857-9dde-1bc372afcbb1" in namespace "secrets-5919" to be "Succeeded or Failed"
Jun 15 03:29:42.422: INFO: Pod "pod-secrets-d7ec9f18-f4ef-4857-9dde-1bc372afcbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 144.677199ms
Jun 15 03:29:44.567: INFO: Pod "pod-secrets-d7ec9f18-f4ef-4857-9dde-1bc372afcbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289393297s
Jun 15 03:29:46.712: INFO: Pod "pod-secrets-d7ec9f18-f4ef-4857-9dde-1bc372afcbb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434644398s
STEP: Saw pod success
Jun 15 03:29:46.712: INFO: Pod "pod-secrets-d7ec9f18-f4ef-4857-9dde-1bc372afcbb1" satisfied condition "Succeeded or Failed"
Jun 15 03:29:46.856: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-secrets-d7ec9f18-f4ef-4857-9dde-1bc372afcbb1 container secret-env-test: <nil>
STEP: delete the pod
Jun 15 03:29:47.154: INFO: Waiting for pod pod-secrets-d7ec9f18-f4ef-4857-9dde-1bc372afcbb1 to disappear
Jun 15 03:29:47.298: INFO: Pod pod-secrets-d7ec9f18-f4ef-4857-9dde-1bc372afcbb1 no longer exists
[AfterEach] [sig-node] Secrets
test/e2e/framework/framework.go:188
... skipping 45 lines ...
• [SLOW TEST:12.340 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":6,"skipped":24,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:52.605: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 172 lines ...
• [SLOW TEST:66.320 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
should observe that the PodDisruptionBudget status is not updated for unmanaged pods
test/e2e/apps/disruption.go:194
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods","total":-1,"completed":5,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:55.082: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/framework/framework.go:188
... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: tmpfs]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [sig-network] Conntrack
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:18.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 47 lines ...
• [SLOW TEST:36.973 seconds]
[sig-network] Conntrack
test/e2e/network/common/framework.go:23
should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
test/e2e/network/conntrack.go:208
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":4,"skipped":6,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:55.375: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":89,"failed":0}
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:47.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:7.811 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should support configurable pod DNS nameservers [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":9,"skipped":89,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:37.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 47 lines ...
test/e2e/kubectl/kubectl.go:380
should return command exit codes
test/e2e/kubectl/kubectl.go:500
execing into a container with a successful command
test/e2e/kubectl/kubectl.go:501
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command","total":-1,"completed":9,"skipped":74,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:56.385: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 310 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
test/e2e/common/node/security_context.go:48
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
Jun 15 03:29:47.681: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-442aec5a-c669-4628-875a-eb0779060266" in namespace "security-context-test-6786" to be "Succeeded or Failed"
Jun 15 03:29:47.824: INFO: Pod "alpine-nnp-false-442aec5a-c669-4628-875a-eb0779060266": Phase="Pending", Reason="", readiness=false. Elapsed: 143.098113ms
Jun 15 03:29:49.969: INFO: Pod "alpine-nnp-false-442aec5a-c669-4628-875a-eb0779060266": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287853406s
Jun 15 03:29:52.115: INFO: Pod "alpine-nnp-false-442aec5a-c669-4628-875a-eb0779060266": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433504354s
Jun 15 03:29:54.260: INFO: Pod "alpine-nnp-false-442aec5a-c669-4628-875a-eb0779060266": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578720648s
Jun 15 03:29:56.405: INFO: Pod "alpine-nnp-false-442aec5a-c669-4628-875a-eb0779060266": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.723466119s
Jun 15 03:29:56.405: INFO: Pod "alpine-nnp-false-442aec5a-c669-4628-875a-eb0779060266" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:188
Jun 15 03:29:56.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6786" for this suite.
... skipping 2 lines ...
test/e2e/common/node/framework.go:23
when creating containers with AllowPrivilegeEscalation
test/e2e/common/node/security_context.go:298
should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":81,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:56.858: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 82 lines ...
Jun 15 03:29:24.443: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:24.586: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:24.732: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:24.877: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:25.021: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:25.165: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:25.165: INFO: Lookups using dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local]
Jun 15 03:29:30.317: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:30.461: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:30.605: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:30.753: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:30.897: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:31.041: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:31.186: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:31.330: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:31.331: INFO: Lookups using dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local]
Jun 15 03:29:35.312: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:35.457: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:35.601: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:35.745: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:35.888: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:36.032: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:36.176: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:36.320: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:36.320: INFO: Lookups using dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local]
Jun 15 03:29:40.312: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:40.457: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:40.601: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:40.745: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:40.889: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:41.033: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:41.178: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:41.323: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:41.323: INFO: Lookups using dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local]
Jun 15 03:29:45.313: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:45.459: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:45.603: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:45.746: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:45.891: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:46.035: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:46.179: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:46.322: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:46.322: INFO: Lookups using dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local jessie_udp@dns-test-service-2.dns-6888.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6888.svc.cluster.local]
Jun 15 03:29:50.313: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:50.456: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:50.604: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:50.747: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local from pod dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431: the server could not find the requested resource (get pods dns-test-16b4ee07-8f38-4173-94da-39fc71c28431)
Jun 15 03:29:51.326: INFO: Lookups using dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6888.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6888.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6888.svc.cluster.local]
Jun 15 03:29:56.332: INFO: DNS probes using dns-6888/dns-test-16b4ee07-8f38-4173-94da-39fc71c28431 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 93 lines ...
• [SLOW TEST:19.356 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":10,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:29:58.865: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:188
... skipping 282 lines ...
• [SLOW TEST:25.897 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should be able to schedule after more than 100 missed schedule
test/e2e/apps/cronjob.go:191
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":13,"skipped":74,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:02.417: INFO: Only supported for providers [azure] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-disk]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [azure] (not aws)
test/e2e/storage/drivers/in_tree.go:1576
------------------------------
... skipping 7 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating secret with name secret-test-map-034f1758-0779-44aa-b46d-9db15e1008ba
STEP: Creating a pod to test consume secrets
Jun 15 03:29:58.009: INFO: Waiting up to 5m0s for pod "pod-secrets-806fe7d4-41bf-4431-a9a7-c268e87995cd" in namespace "secrets-2318" to be "Succeeded or Failed"
Jun 15 03:29:58.153: INFO: Pod "pod-secrets-806fe7d4-41bf-4431-a9a7-c268e87995cd": Phase="Pending", Reason="", readiness=false. Elapsed: 143.511814ms
Jun 15 03:30:00.299: INFO: Pod "pod-secrets-806fe7d4-41bf-4431-a9a7-c268e87995cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289521302s
Jun 15 03:30:02.444: INFO: Pod "pod-secrets-806fe7d4-41bf-4431-a9a7-c268e87995cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434828516s
STEP: Saw pod success
Jun 15 03:30:02.444: INFO: Pod "pod-secrets-806fe7d4-41bf-4431-a9a7-c268e87995cd" satisfied condition "Succeeded or Failed"
Jun 15 03:30:02.588: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-secrets-806fe7d4-41bf-4431-a9a7-c268e87995cd container secret-volume-test: <nil>
STEP: delete the pod
Jun 15 03:30:02.885: INFO: Waiting for pod pod-secrets-806fe7d4-41bf-4431-a9a7-c268e87995cd to disappear
Jun 15 03:30:03.029: INFO: Pod pod-secrets-806fe7d4-41bf-4431-a9a7-c268e87995cd no longer exists
[AfterEach] [sig-storage] Secrets
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.604 seconds]
[sig-storage] Secrets
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":144,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating projection with secret that has name projected-secret-test-9ce8244c-b5ee-4a59-9b7e-d2c3603fcd20
STEP: Creating a pod to test consume secrets
Jun 15 03:29:56.680: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b8303757-f1db-4a19-bd60-2dc52cac0d8a" in namespace "projected-5807" to be "Succeeded or Failed"
Jun 15 03:29:56.823: INFO: Pod "pod-projected-secrets-b8303757-f1db-4a19-bd60-2dc52cac0d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 143.454301ms
Jun 15 03:29:58.968: INFO: Pod "pod-projected-secrets-b8303757-f1db-4a19-bd60-2dc52cac0d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287634651s
Jun 15 03:30:01.111: INFO: Pod "pod-projected-secrets-b8303757-f1db-4a19-bd60-2dc52cac0d8a": Phase="Running", Reason="", readiness=false. Elapsed: 4.431225347s
Jun 15 03:30:03.255: INFO: Pod "pod-projected-secrets-b8303757-f1db-4a19-bd60-2dc52cac0d8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.575595918s
STEP: Saw pod success
Jun 15 03:30:03.256: INFO: Pod "pod-projected-secrets-b8303757-f1db-4a19-bd60-2dc52cac0d8a" satisfied condition "Succeeded or Failed"
Jun 15 03:30:03.399: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-projected-secrets-b8303757-f1db-4a19-bd60-2dc52cac0d8a container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 15 03:30:03.694: INFO: Waiting for pod pod-projected-secrets-b8303757-f1db-4a19-bd60-2dc52cac0d8a to disappear
Jun 15 03:30:03.837: INFO: Pod pod-projected-secrets-b8303757-f1db-4a19-bd60-2dc52cac0d8a no longer exists
[AfterEach] [sig-storage] Projected secret
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.747 seconds]
[sig-storage] Projected secret
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:04.141: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/framework/framework.go:188
... skipping 11 lines ...
Driver local doesn't support GenericEphemeralVolume -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":13,"skipped":129,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:56.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 15 03:29:58.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c205aae9-1945-46a3-b987-af97e81da807" in namespace "projected-254" to be "Succeeded or Failed"
Jun 15 03:29:58.231: INFO: Pod "downwardapi-volume-c205aae9-1945-46a3-b987-af97e81da807": Phase="Pending", Reason="", readiness=false. Elapsed: 143.172966ms
Jun 15 03:30:00.375: INFO: Pod "downwardapi-volume-c205aae9-1945-46a3-b987-af97e81da807": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287303648s
Jun 15 03:30:02.520: INFO: Pod "downwardapi-volume-c205aae9-1945-46a3-b987-af97e81da807": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431748936s
Jun 15 03:30:04.665: INFO: Pod "downwardapi-volume-c205aae9-1945-46a3-b987-af97e81da807": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.576471243s
STEP: Saw pod success
Jun 15 03:30:04.665: INFO: Pod "downwardapi-volume-c205aae9-1945-46a3-b987-af97e81da807" satisfied condition "Succeeded or Failed"
Jun 15 03:30:04.808: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod downwardapi-volume-c205aae9-1945-46a3-b987-af97e81da807 container client-container: <nil>
STEP: delete the pod
Jun 15 03:30:05.103: INFO: Waiting for pod downwardapi-volume-c205aae9-1945-46a3-b987-af97e81da807 to disappear
Jun 15 03:30:05.248: INFO: Pod downwardapi-volume-c205aae9-1945-46a3-b987-af97e81da807 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.599 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide container's memory request [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":129,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:05.554: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 24 lines ...
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
Jun 15 03:29:47.482: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 15 03:29:47.772: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3507" in namespace "provisioning-3507" to be "Succeeded or Failed"
Jun 15 03:29:47.916: INFO: Pod "hostpath-symlink-prep-provisioning-3507": Phase="Pending", Reason="", readiness=false. Elapsed: 143.649892ms
Jun 15 03:29:50.062: INFO: Pod "hostpath-symlink-prep-provisioning-3507": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28938325s
Jun 15 03:29:52.206: INFO: Pod "hostpath-symlink-prep-provisioning-3507": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433777752s
STEP: Saw pod success
Jun 15 03:29:52.206: INFO: Pod "hostpath-symlink-prep-provisioning-3507" satisfied condition "Succeeded or Failed"
Jun 15 03:29:52.206: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3507" in namespace "provisioning-3507"
Jun 15 03:29:52.355: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3507" to be fully deleted
Jun 15 03:29:52.500: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8m8p
STEP: Creating a pod to test subpath
Jun 15 03:29:52.647: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8m8p" in namespace "provisioning-3507" to be "Succeeded or Failed"
Jun 15 03:29:52.791: INFO: Pod "pod-subpath-test-inlinevolume-8m8p": Phase="Pending", Reason="", readiness=false. Elapsed: 143.807273ms
Jun 15 03:29:54.935: INFO: Pod "pod-subpath-test-inlinevolume-8m8p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287761754s
Jun 15 03:29:57.080: INFO: Pod "pod-subpath-test-inlinevolume-8m8p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4326383s
Jun 15 03:29:59.225: INFO: Pod "pod-subpath-test-inlinevolume-8m8p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578256875s
STEP: Saw pod success
Jun 15 03:29:59.225: INFO: Pod "pod-subpath-test-inlinevolume-8m8p" satisfied condition "Succeeded or Failed"
Jun 15 03:29:59.369: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-inlinevolume-8m8p container test-container-volume-inlinevolume-8m8p: <nil>
STEP: delete the pod
Jun 15 03:29:59.665: INFO: Waiting for pod pod-subpath-test-inlinevolume-8m8p to disappear
Jun 15 03:29:59.808: INFO: Pod pod-subpath-test-inlinevolume-8m8p no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8m8p
Jun 15 03:29:59.808: INFO: Deleting pod "pod-subpath-test-inlinevolume-8m8p" in namespace "provisioning-3507"
STEP: Deleting pod
Jun 15 03:29:59.953: INFO: Deleting pod "pod-subpath-test-inlinevolume-8m8p" in namespace "provisioning-3507"
Jun 15 03:30:00.246: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3507" in namespace "provisioning-3507" to be "Succeeded or Failed"
Jun 15 03:30:00.389: INFO: Pod "hostpath-symlink-prep-provisioning-3507": Phase="Pending", Reason="", readiness=false. Elapsed: 143.735894ms
Jun 15 03:30:02.533: INFO: Pod "hostpath-symlink-prep-provisioning-3507": Phase="Running", Reason="", readiness=true. Elapsed: 2.287719784s
Jun 15 03:30:04.678: INFO: Pod "hostpath-symlink-prep-provisioning-3507": Phase="Running", Reason="", readiness=false. Elapsed: 4.43275853s
Jun 15 03:30:06.824: INFO: Pod "hostpath-symlink-prep-provisioning-3507": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577926627s
STEP: Saw pod success
Jun 15 03:30:06.824: INFO: Pod "hostpath-symlink-prep-provisioning-3507" satisfied condition "Succeeded or Failed"
Jun 15 03:30:06.824: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3507" in namespace "provisioning-3507"
Jun 15 03:30:06.971: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3507" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/framework/framework.go:188
Jun 15 03:30:07.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3507" for this suite.
... skipping 6 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":67,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:07.445: INFO: Only supported for providers [azure] (not aws)
... skipping 151 lines ...
test/e2e/storage/utils/framework.go:23
CSI Volume expansion
test/e2e/storage/csi_mock_volume.go:639
should expand volume without restarting pod if nodeExpansion=off
test/e2e/storage/csi_mock_volume.go:668
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":6,"skipped":40,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Destroying namespace "apply-7132" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
test/e2e/apimachinery/apply.go:59
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":15,"skipped":131,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:08.052: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 73 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:30:09.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3346" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":10,"skipped":79,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:09.675: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 94 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-map-6f37525f-d785-45b8-8c96-492ce16a3b96
STEP: Creating a pod to test consume configMaps
Jun 15 03:30:04.656: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5d87dda-6832-4df8-9a81-92ccae325ab7" in namespace "projected-2900" to be "Succeeded or Failed"
Jun 15 03:30:04.799: INFO: Pod "pod-projected-configmaps-f5d87dda-6832-4df8-9a81-92ccae325ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.532546ms
Jun 15 03:30:06.945: INFO: Pod "pod-projected-configmaps-f5d87dda-6832-4df8-9a81-92ccae325ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288890285s
Jun 15 03:30:09.088: INFO: Pod "pod-projected-configmaps-f5d87dda-6832-4df8-9a81-92ccae325ab7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432820912s
STEP: Saw pod success
Jun 15 03:30:09.089: INFO: Pod "pod-projected-configmaps-f5d87dda-6832-4df8-9a81-92ccae325ab7" satisfied condition "Succeeded or Failed"
Jun 15 03:30:09.232: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-projected-configmaps-f5d87dda-6832-4df8-9a81-92ccae325ab7 container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:30:09.525: INFO: Waiting for pod pod-projected-configmaps-f5d87dda-6832-4df8-9a81-92ccae325ab7 to disappear
Jun 15 03:30:09.669: INFO: Pod pod-projected-configmaps-f5d87dda-6832-4df8-9a81-92ccae325ab7 no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.602 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":149,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:18.465 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":7,"skipped":35,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:11.147: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 68 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 15 03:30:05.327: INFO: Waiting up to 5m0s for pod "pod-c42090a1-5764-4913-8de9-eb777a31dce6" in namespace "emptydir-2134" to be "Succeeded or Failed"
Jun 15 03:30:05.470: INFO: Pod "pod-c42090a1-5764-4913-8de9-eb777a31dce6": Phase="Pending", Reason="", readiness=false. Elapsed: 143.185274ms
Jun 15 03:30:07.614: INFO: Pod "pod-c42090a1-5764-4913-8de9-eb777a31dce6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286716685s
Jun 15 03:30:09.760: INFO: Pod "pod-c42090a1-5764-4913-8de9-eb777a31dce6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433240341s
Jun 15 03:30:11.905: INFO: Pod "pod-c42090a1-5764-4913-8de9-eb777a31dce6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577950309s
STEP: Saw pod success
Jun 15 03:30:11.905: INFO: Pod "pod-c42090a1-5764-4913-8de9-eb777a31dce6" satisfied condition "Succeeded or Failed"
Jun 15 03:30:12.049: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-c42090a1-5764-4913-8de9-eb777a31dce6 container test-container: <nil>
STEP: delete the pod
Jun 15 03:30:12.345: INFO: Waiting for pod pod-c42090a1-5764-4913-8de9-eb777a31dce6 to disappear
Jun 15 03:30:12.489: INFO: Pod pod-c42090a1-5764-4913-8de9-eb777a31dce6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.609 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:12.793: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
test/e2e/framework/framework.go:188
... skipping 162 lines ...
• [SLOW TEST:28.470 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":132,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 63 lines ...
test/e2e/storage/persistent_volumes-local.go:194
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:211
should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":10,"skipped":88,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 23 lines ...
Jun 15 03:30:01.985: INFO: PersistentVolumeClaim pvc-xgqhm found but phase is Pending instead of Bound.
Jun 15 03:30:04.130: INFO: PersistentVolumeClaim pvc-xgqhm found and phase=Bound (13.025262316s)
Jun 15 03:30:04.130: INFO: Waiting up to 3m0s for PersistentVolume local-ngcnd to have phase Bound
Jun 15 03:30:04.273: INFO: PersistentVolume local-ngcnd found and phase=Bound (143.393837ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-r8sb
STEP: Creating a pod to test subpath
Jun 15 03:30:04.706: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-r8sb" in namespace "provisioning-8191" to be "Succeeded or Failed"
Jun 15 03:30:04.850: INFO: Pod "pod-subpath-test-preprovisionedpv-r8sb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.74289ms
Jun 15 03:30:06.994: INFO: Pod "pod-subpath-test-preprovisionedpv-r8sb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287969069s
Jun 15 03:30:09.142: INFO: Pod "pod-subpath-test-preprovisionedpv-r8sb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436488064s
Jun 15 03:30:11.286: INFO: Pod "pod-subpath-test-preprovisionedpv-r8sb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580192672s
Jun 15 03:30:13.431: INFO: Pod "pod-subpath-test-preprovisionedpv-r8sb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.725455455s
Jun 15 03:30:15.576: INFO: Pod "pod-subpath-test-preprovisionedpv-r8sb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.870260184s
STEP: Saw pod success
Jun 15 03:30:15.576: INFO: Pod "pod-subpath-test-preprovisionedpv-r8sb" satisfied condition "Succeeded or Failed"
Jun 15 03:30:15.720: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-preprovisionedpv-r8sb container test-container-subpath-preprovisionedpv-r8sb: <nil>
STEP: delete the pod
Jun 15 03:30:16.014: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-r8sb to disappear
Jun 15 03:30:16.157: INFO: Pod pod-subpath-test-preprovisionedpv-r8sb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-r8sb
Jun 15 03:30:16.157: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-r8sb" in namespace "provisioning-8191"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":71,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":17,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:37.271: INFO: >>> kubeConfig: /root/.kube/config
... skipping 7 lines ...
Jun 15 03:29:38.315: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-1093q9xz7
STEP: creating a claim
Jun 15 03:29:38.459: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-47dn
STEP: Creating a pod to test subpath
Jun 15 03:29:38.967: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-47dn" in namespace "provisioning-1093" to be "Succeeded or Failed"
Jun 15 03:29:39.112: INFO: Pod "pod-subpath-test-dynamicpv-47dn": Phase="Pending", Reason="", readiness=false. Elapsed: 144.678133ms
Jun 15 03:29:41.257: INFO: Pod "pod-subpath-test-dynamicpv-47dn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289784182s
Jun 15 03:29:43.401: INFO: Pod "pod-subpath-test-dynamicpv-47dn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433886258s
Jun 15 03:29:45.547: INFO: Pod "pod-subpath-test-dynamicpv-47dn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579489651s
Jun 15 03:29:47.692: INFO: Pod "pod-subpath-test-dynamicpv-47dn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724841459s
Jun 15 03:29:49.839: INFO: Pod "pod-subpath-test-dynamicpv-47dn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872228931s
Jun 15 03:29:51.985: INFO: Pod "pod-subpath-test-dynamicpv-47dn": Phase="Pending", Reason="", readiness=false. Elapsed: 13.017857709s
Jun 15 03:29:54.134: INFO: Pod "pod-subpath-test-dynamicpv-47dn": Phase="Pending", Reason="", readiness=false. Elapsed: 15.166861493s
Jun 15 03:29:56.279: INFO: Pod "pod-subpath-test-dynamicpv-47dn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.312043722s
STEP: Saw pod success
Jun 15 03:29:56.279: INFO: Pod "pod-subpath-test-dynamicpv-47dn" satisfied condition "Succeeded or Failed"
Jun 15 03:29:56.424: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-dynamicpv-47dn container test-container-subpath-dynamicpv-47dn: <nil>
STEP: delete the pod
Jun 15 03:29:56.726: INFO: Waiting for pod pod-subpath-test-dynamicpv-47dn to disappear
Jun 15 03:29:56.870: INFO: Pod pod-subpath-test-dynamicpv-47dn no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-47dn
Jun 15 03:29:56.870: INFO: Deleting pod "pod-subpath-test-dynamicpv-47dn" in namespace "provisioning-1093"
... skipping 33 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/downwardapi_volume.go:108
STEP: Creating a pod to test downward API volume plugin
Jun 15 03:30:12.375: INFO: Waiting up to 5m0s for pod "metadata-volume-20cba234-a299-4d84-9f1a-3e0356348771" in namespace "downward-api-223" to be "Succeeded or Failed"
Jun 15 03:30:12.522: INFO: Pod "metadata-volume-20cba234-a299-4d84-9f1a-3e0356348771": Phase="Pending", Reason="", readiness=false. Elapsed: 146.722485ms
Jun 15 03:30:14.670: INFO: Pod "metadata-volume-20cba234-a299-4d84-9f1a-3e0356348771": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294543092s
Jun 15 03:30:16.815: INFO: Pod "metadata-volume-20cba234-a299-4d84-9f1a-3e0356348771": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43986086s
Jun 15 03:30:18.961: INFO: Pod "metadata-volume-20cba234-a299-4d84-9f1a-3e0356348771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.585922291s
STEP: Saw pod success
Jun 15 03:30:18.961: INFO: Pod "metadata-volume-20cba234-a299-4d84-9f1a-3e0356348771" satisfied condition "Succeeded or Failed"
Jun 15 03:30:19.106: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod metadata-volume-20cba234-a299-4d84-9f1a-3e0356348771 container client-container: <nil>
STEP: delete the pod
Jun 15 03:30:19.400: INFO: Waiting for pod metadata-volume-20cba234-a299-4d84-9f1a-3e0356348771 to disappear
Jun 15 03:30:19.545: INFO: Pod metadata-volume-20cba234-a299-4d84-9f1a-3e0356348771 no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:8.617 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/downwardapi_volume.go:108
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:19.856: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
test/e2e/framework/framework.go:188
... skipping 74 lines ...
test/e2e/kubectl/portforward.go:454
that expects a client request
test/e2e/kubectl/portforward.go:455
should support a client that connects, sends NO DATA, and disconnects
test/e2e/kubectl/portforward.go:456
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":7,"skipped":43,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:20.440: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 94 lines ...
• [SLOW TEST:7.728 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should release NodePorts on delete
test/e2e/network/service.go:1592
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":7,"skipped":34,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:30:14.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
test/e2e/node/security_context.go:171
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 15 03:30:15.391: INFO: Waiting up to 5m0s for pod "security-context-4ad59ce6-3898-4751-8ea9-930dcf187892" in namespace "security-context-706" to be "Succeeded or Failed"
Jun 15 03:30:15.535: INFO: Pod "security-context-4ad59ce6-3898-4751-8ea9-930dcf187892": Phase="Pending", Reason="", readiness=false. Elapsed: 143.88723ms
Jun 15 03:30:17.682: INFO: Pod "security-context-4ad59ce6-3898-4751-8ea9-930dcf187892": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290922385s
Jun 15 03:30:19.829: INFO: Pod "security-context-4ad59ce6-3898-4751-8ea9-930dcf187892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437554548s
STEP: Saw pod success
Jun 15 03:30:19.829: INFO: Pod "security-context-4ad59ce6-3898-4751-8ea9-930dcf187892" satisfied condition "Succeeded or Failed"
Jun 15 03:30:19.973: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod security-context-4ad59ce6-3898-4751-8ea9-930dcf187892 container test-container: <nil>
STEP: delete the pod
Jun 15 03:30:20.813: INFO: Waiting for pod security-context-4ad59ce6-3898-4751-8ea9-930dcf187892 to disappear
Jun 15 03:30:20.957: INFO: Pod security-context-4ad59ce6-3898-4751-8ea9-930dcf187892 no longer exists
[AfterEach] [sig-node] Security Context
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:7.018 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
should support seccomp unconfined on the pod [LinuxOnly]
test/e2e/node/security_context.go:171
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":13,"skipped":134,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:21.283: INFO: Only supported for providers [vsphere] (not aws)
... skipping 49 lines ...
• [SLOW TEST:19.370 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":14,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:21.817: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/framework/framework.go:188
... skipping 100 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:30:22.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4979" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":14,"skipped":143,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:23.185: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 29 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:30:25.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-524" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":15,"skipped":145,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:25.544: INFO: Driver hostPath doesn't support GenericEphemeralVolume -- skipping
... skipping 158 lines ...
Jun 15 03:30:16.399: INFO: PersistentVolumeClaim pvc-l2j5d found but phase is Pending instead of Bound.
Jun 15 03:30:18.544: INFO: PersistentVolumeClaim pvc-l2j5d found and phase=Bound (10.870629062s)
Jun 15 03:30:18.544: INFO: Waiting up to 3m0s for PersistentVolume local-55dph to have phase Bound
Jun 15 03:30:18.693: INFO: PersistentVolume local-55dph found and phase=Bound (148.539584ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-tcps
STEP: Creating a pod to test exec-volume-test
Jun 15 03:30:19.127: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-tcps" in namespace "volume-3727" to be "Succeeded or Failed"
Jun 15 03:30:19.271: INFO: Pod "exec-volume-test-preprovisionedpv-tcps": Phase="Pending", Reason="", readiness=false. Elapsed: 144.316684ms
Jun 15 03:30:21.417: INFO: Pod "exec-volume-test-preprovisionedpv-tcps": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289705176s
Jun 15 03:30:23.563: INFO: Pod "exec-volume-test-preprovisionedpv-tcps": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.436193037s
STEP: Saw pod success
Jun 15 03:30:23.563: INFO: Pod "exec-volume-test-preprovisionedpv-tcps" satisfied condition "Succeeded or Failed"
Jun 15 03:30:23.708: INFO: Trying to get logs from node i-05fe3937684c9d649 pod exec-volume-test-preprovisionedpv-tcps container exec-container-preprovisionedpv-tcps: <nil>
STEP: delete the pod
Jun 15 03:30:24.008: INFO: Waiting for pod exec-volume-test-preprovisionedpv-tcps to disappear
Jun 15 03:30:24.153: INFO: Pod exec-volume-test-preprovisionedpv-tcps no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-tcps
Jun 15 03:30:24.153: INFO: Deleting pod "exec-volume-test-preprovisionedpv-tcps" in namespace "volume-3727"
... skipping 19 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] volumes
test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":107,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 90 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:30:26.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-7473" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":15,"skipped":91,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 23 lines ...
Jun 15 03:30:16.795: INFO: PersistentVolumeClaim pvc-pr8zz found but phase is Pending instead of Bound.
Jun 15 03:30:18.941: INFO: PersistentVolumeClaim pvc-pr8zz found and phase=Bound (13.018505731s)
Jun 15 03:30:18.941: INFO: Waiting up to 3m0s for PersistentVolume local-w8j5b to have phase Bound
Jun 15 03:30:19.086: INFO: PersistentVolume local-w8j5b found and phase=Bound (144.703799ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rlhj
STEP: Creating a pod to test subpath
Jun 15 03:30:19.524: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rlhj" in namespace "provisioning-5662" to be "Succeeded or Failed"
Jun 15 03:30:19.673: INFO: Pod "pod-subpath-test-preprovisionedpv-rlhj": Phase="Pending", Reason="", readiness=false. Elapsed: 148.919909ms
Jun 15 03:30:21.818: INFO: Pod "pod-subpath-test-preprovisionedpv-rlhj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29454089s
Jun 15 03:30:23.965: INFO: Pod "pod-subpath-test-preprovisionedpv-rlhj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441369931s
Jun 15 03:30:26.111: INFO: Pod "pod-subpath-test-preprovisionedpv-rlhj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.587596604s
STEP: Saw pod success
Jun 15 03:30:26.111: INFO: Pod "pod-subpath-test-preprovisionedpv-rlhj" satisfied condition "Succeeded or Failed"
Jun 15 03:30:26.263: INFO: Trying to get logs from node i-05fe3937684c9d649 pod pod-subpath-test-preprovisionedpv-rlhj container test-container-volume-preprovisionedpv-rlhj: <nil>
STEP: delete the pod
Jun 15 03:30:26.558: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rlhj to disappear
Jun 15 03:30:26.704: INFO: Pod pod-subpath-test-preprovisionedpv-rlhj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rlhj
Jun 15 03:30:26.704: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rlhj" in namespace "provisioning-5662"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support non-existent path
test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":20,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:28.796: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 37 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:30:29.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9541" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":16,"skipped":94,"failed":0}
SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:29.849: INFO: Only supported for providers [vsphere] (not aws)
... skipping 72 lines ...
• [SLOW TEST:9.618 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":8,"skipped":51,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 9 lines ...
Jun 15 03:29:58.753: INFO: Creating resource for dynamic PV
Jun 15 03:29:58.753: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi}
STEP: creating a StorageClass volume-expand-888klslf
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jun 15 03:29:59.188: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
Jun 15 03:29:59.477: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:01.767: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:03.766: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:05.768: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:07.766: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:09.768: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:11.766: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:13.766: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:15.765: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:17.769: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:19.767: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:21.766: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:23.767: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:25.766: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:27.767: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:29.770: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 5 lines ...
},
VolumeName: "",
StorageClassName: &"volume-expand-888klslf",
... // 3 identical fields
}
Jun 15 03:30:30.059: INFO: Error updating pvc awsrnt97: PersistentVolumeClaim "awsrnt97" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteOnce"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
... skipping 24 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (block volmode)] volume-expand
test/e2e/storage/framework/testsuite.go:50
should not allow expansion of pvcs without AllowVolumeExpansion property
test/e2e/storage/testsuites/volume_expand.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":12,"skipped":80,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:30.824: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 93 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
test/e2e/common/storage/downwardapi_volume.go:43
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 15 03:30:26.841: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1ff3d45-45d3-4da1-b70c-b24a9e5c97d5" in namespace "downward-api-5498" to be "Succeeded or Failed"
Jun 15 03:30:26.985: INFO: Pod "downwardapi-volume-e1ff3d45-45d3-4da1-b70c-b24a9e5c97d5": Phase="Pending", Reason="", readiness=false. Elapsed: 144.325422ms
Jun 15 03:30:29.130: INFO: Pod "downwardapi-volume-e1ff3d45-45d3-4da1-b70c-b24a9e5c97d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289230402s
Jun 15 03:30:31.279: INFO: Pod "downwardapi-volume-e1ff3d45-45d3-4da1-b70c-b24a9e5c97d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.438035374s
STEP: Saw pod success
Jun 15 03:30:31.279: INFO: Pod "downwardapi-volume-e1ff3d45-45d3-4da1-b70c-b24a9e5c97d5" satisfied condition "Succeeded or Failed"
Jun 15 03:30:31.423: INFO: Trying to get logs from node i-05fe3937684c9d649 pod downwardapi-volume-e1ff3d45-45d3-4da1-b70c-b24a9e5c97d5 container client-container: <nil>
STEP: delete the pod
Jun 15 03:30:31.718: INFO: Waiting for pod downwardapi-volume-e1ff3d45-45d3-4da1-b70c-b24a9e5c97d5 to disappear
Jun 15 03:30:31.862: INFO: Pod downwardapi-volume-e1ff3d45-45d3-4da1-b70c-b24a9e5c97d5 no longer exists
[AfterEach] [sig-storage] Downward API volume
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.478 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":160,"failed":0}
SSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":9,"skipped":42,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:30:01.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 99 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume at the same time
test/e2e/storage/persistent_volumes-local.go:250
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:32.399: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/framework/framework.go:188
... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: dir-bindmounted]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 83 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:30:32.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-4735" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":13,"skipped":92,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:32.567: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 172 lines ...
Jun 15 03:29:52.506: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 15 03:29:52.662: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathbjnbc] to have phase Bound
Jun 15 03:29:52.808: INFO: PersistentVolumeClaim csi-hostpathbjnbc found but phase is Pending instead of Bound.
Jun 15 03:29:54.953: INFO: PersistentVolumeClaim csi-hostpathbjnbc found and phase=Bound (2.291065516s)
STEP: Creating pod pod-subpath-test-dynamicpv-g2m8
STEP: Creating a pod to test subpath
Jun 15 03:29:55.391: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-g2m8" in namespace "provisioning-892" to be "Succeeded or Failed"
Jun 15 03:29:55.541: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 149.788899ms
Jun 15 03:29:57.689: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297320245s
Jun 15 03:29:59.835: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443633025s
Jun 15 03:30:01.984: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.592159309s
Jun 15 03:30:04.130: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.738765217s
Jun 15 03:30:06.276: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.884713262s
Jun 15 03:30:08.422: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.030391842s
STEP: Saw pod success
Jun 15 03:30:08.422: INFO: Pod "pod-subpath-test-dynamicpv-g2m8" satisfied condition "Succeeded or Failed"
Jun 15 03:30:08.568: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-dynamicpv-g2m8 container test-container-subpath-dynamicpv-g2m8: <nil>
STEP: delete the pod
Jun 15 03:30:08.872: INFO: Waiting for pod pod-subpath-test-dynamicpv-g2m8 to disappear
Jun 15 03:30:09.017: INFO: Pod pod-subpath-test-dynamicpv-g2m8 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-g2m8
Jun 15 03:30:09.018: INFO: Deleting pod "pod-subpath-test-dynamicpv-g2m8" in namespace "provisioning-892"
STEP: Creating pod pod-subpath-test-dynamicpv-g2m8
STEP: Creating a pod to test subpath
Jun 15 03:30:09.309: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-g2m8" in namespace "provisioning-892" to be "Succeeded or Failed"
Jun 15 03:30:09.455: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 145.273406ms
Jun 15 03:30:11.600: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290705321s
Jun 15 03:30:13.747: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437380303s
Jun 15 03:30:15.893: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583299308s
Jun 15 03:30:18.041: INFO: Pod "pod-subpath-test-dynamicpv-g2m8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.731958268s
STEP: Saw pod success
Jun 15 03:30:18.041: INFO: Pod "pod-subpath-test-dynamicpv-g2m8" satisfied condition "Succeeded or Failed"
Jun 15 03:30:18.187: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-dynamicpv-g2m8 container test-container-subpath-dynamicpv-g2m8: <nil>
STEP: delete the pod
Jun 15 03:30:19.044: INFO: Waiting for pod pod-subpath-test-dynamicpv-g2m8 to disappear
Jun 15 03:30:19.188: INFO: Pod pod-subpath-test-dynamicpv-g2m8 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-g2m8
Jun 15 03:30:19.189: INFO: Deleting pod "pod-subpath-test-dynamicpv-g2m8" in namespace "provisioning-892"
... skipping 60 lines ...
test/e2e/storage/csi_volumes.go:40
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directories when readOnly specified in the volumeSource
test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":44,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:38.470: INFO: Only supported for providers [vsphere] (not aws)
... skipping 78 lines ...
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
should list, patch and delete a collection of StatefulSets [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":11,"skipped":90,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:38.950: INFO: Only supported for providers [azure] (not aws)
... skipping 124 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:50
(Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":9,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:43.242: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/framework/framework.go:188
... skipping 11 lines ...
Only supported for providers [openstack] (not aws)
test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":18,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:30:18.765: INFO: >>> kubeConfig: /root/.kube/config
... skipping 19 lines ...
Jun 15 03:30:32.345: INFO: PersistentVolumeClaim pvc-tzxqq found but phase is Pending instead of Bound.
Jun 15 03:30:34.490: INFO: PersistentVolumeClaim pvc-tzxqq found and phase=Bound (10.870343322s)
Jun 15 03:30:34.490: INFO: Waiting up to 3m0s for PersistentVolume local-lzsmc to have phase Bound
Jun 15 03:30:34.635: INFO: PersistentVolume local-lzsmc found and phase=Bound (144.853919ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xsrt
STEP: Creating a pod to test subpath
Jun 15 03:30:35.070: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xsrt" in namespace "provisioning-7807" to be "Succeeded or Failed"
Jun 15 03:30:35.215: INFO: Pod "pod-subpath-test-preprovisionedpv-xsrt": Phase="Pending", Reason="", readiness=false. Elapsed: 144.861751ms
Jun 15 03:30:37.360: INFO: Pod "pod-subpath-test-preprovisionedpv-xsrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289444352s
Jun 15 03:30:39.506: INFO: Pod "pod-subpath-test-preprovisionedpv-xsrt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435900553s
Jun 15 03:30:41.652: INFO: Pod "pod-subpath-test-preprovisionedpv-xsrt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.582133131s
STEP: Saw pod success
Jun 15 03:30:41.652: INFO: Pod "pod-subpath-test-preprovisionedpv-xsrt" satisfied condition "Succeeded or Failed"
Jun 15 03:30:41.797: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-preprovisionedpv-xsrt container test-container-volume-preprovisionedpv-xsrt: <nil>
STEP: delete the pod
Jun 15 03:30:42.099: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xsrt to disappear
Jun 15 03:30:42.248: INFO: Pod pod-subpath-test-preprovisionedpv-xsrt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xsrt
Jun 15 03:30:42.248: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xsrt" in namespace "provisioning-7807"
... skipping 33 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jun 15 03:30:39.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f11d1fd8-130f-40c0-b1ec-7374f80b74c3" in namespace "projected-6706" to be "Succeeded or Failed"
Jun 15 03:30:39.855: INFO: Pod "downwardapi-volume-f11d1fd8-130f-40c0-b1ec-7374f80b74c3": Phase="Pending", Reason="", readiness=false. Elapsed: 145.607294ms
Jun 15 03:30:42.002: INFO: Pod "downwardapi-volume-f11d1fd8-130f-40c0-b1ec-7374f80b74c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292785057s
Jun 15 03:30:44.149: INFO: Pod "downwardapi-volume-f11d1fd8-130f-40c0-b1ec-7374f80b74c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.440169901s
STEP: Saw pod success
Jun 15 03:30:44.149: INFO: Pod "downwardapi-volume-f11d1fd8-130f-40c0-b1ec-7374f80b74c3" satisfied condition "Succeeded or Failed"
Jun 15 03:30:44.295: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod downwardapi-volume-f11d1fd8-130f-40c0-b1ec-7374f80b74c3 container client-container: <nil>
STEP: delete the pod
Jun 15 03:30:44.590: INFO: Waiting for pod downwardapi-volume-f11d1fd8-130f-40c0-b1ec-7374f80b74c3 to disappear
Jun 15 03:30:44.735: INFO: Pod downwardapi-volume-f11d1fd8-130f-40c0-b1ec-7374f80b74c3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.497 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] Services
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:16.240 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to change the type from ClusterIP to ExternalName [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":9,"skipped":53,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] health handlers
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 10 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:30:46.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-6759" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":8,"skipped":62,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:47.162: INFO: Only supported for providers [openstack] (not aws)
... skipping 33 lines ...
Jun 15 03:30:10.760: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-5832mwhph
STEP: creating a claim
Jun 15 03:30:10.904: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-5nfj
STEP: Creating a pod to test subpath
Jun 15 03:30:11.341: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5nfj" in namespace "provisioning-5832" to be "Succeeded or Failed"
Jun 15 03:30:11.486: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Pending", Reason="", readiness=false. Elapsed: 144.344071ms
Jun 15 03:30:13.630: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288199877s
Jun 15 03:30:15.774: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432693203s
Jun 15 03:30:17.920: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578822955s
Jun 15 03:30:20.070: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728439846s
Jun 15 03:30:22.216: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.874789007s
Jun 15 03:30:24.362: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.020982578s
Jun 15 03:30:26.508: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Pending", Reason="", readiness=false. Elapsed: 15.166117686s
Jun 15 03:30:28.655: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Pending", Reason="", readiness=false. Elapsed: 17.313476487s
Jun 15 03:30:30.800: INFO: Pod "pod-subpath-test-dynamicpv-5nfj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.459006537s
STEP: Saw pod success
Jun 15 03:30:30.801: INFO: Pod "pod-subpath-test-dynamicpv-5nfj" satisfied condition "Succeeded or Failed"
Jun 15 03:30:30.944: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-dynamicpv-5nfj container test-container-volume-dynamicpv-5nfj: <nil>
STEP: delete the pod
Jun 15 03:30:31.240: INFO: Waiting for pod pod-subpath-test-dynamicpv-5nfj to disappear
Jun 15 03:30:31.386: INFO: Pod pod-subpath-test-dynamicpv-5nfj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-5nfj
Jun 15 03:30:31.386: INFO: Deleting pod "pod-subpath-test-dynamicpv-5nfj" in namespace "provisioning-5832"
... skipping 20 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing directory
test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":11,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:48.141: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/framework/framework.go:188
... skipping 89 lines ...
test/e2e/storage/persistent_volumes-local.go:194
Two pods mounting a local volume one after the other
test/e2e/storage/persistent_volumes-local.go:256
should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":70,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:48.443: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
test/e2e/storage/testsuites/volumes.go:161
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":19,"skipped":76,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:30:44.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-map-cbac2699-02ea-4fb5-ab52-e47b4b4eea60
STEP: Creating a pod to test consume configMaps
Jun 15 03:30:45.540: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58122f7a-d6be-45c5-828a-646342447dbc" in namespace "projected-1268" to be "Succeeded or Failed"
Jun 15 03:30:45.684: INFO: Pod "pod-projected-configmaps-58122f7a-d6be-45c5-828a-646342447dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 144.000536ms
Jun 15 03:30:47.830: INFO: Pod "pod-projected-configmaps-58122f7a-d6be-45c5-828a-646342447dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289821099s
Jun 15 03:30:49.975: INFO: Pod "pod-projected-configmaps-58122f7a-d6be-45c5-828a-646342447dbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435030315s
STEP: Saw pod success
Jun 15 03:30:49.975: INFO: Pod "pod-projected-configmaps-58122f7a-d6be-45c5-828a-646342447dbc" satisfied condition "Succeeded or Failed"
Jun 15 03:30:50.120: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-projected-configmaps-58122f7a-d6be-45c5-828a-646342447dbc container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:30:50.415: INFO: Waiting for pod pod-projected-configmaps-58122f7a-d6be-45c5-828a-646342447dbc to disappear
Jun 15 03:30:50.559: INFO: Pod pod-projected-configmaps-58122f7a-d6be-45c5-828a-646342447dbc no longer exists
[AfterEach] [sig-storage] Projected configMap
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.613 seconds]
[sig-storage] Projected configMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":76,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:50.929: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 112 lines ...
test/e2e/common/node/framework.go:23
should be restarted with a failing exec liveness probe that took longer than the timeout
test/e2e/common/node/container_probe.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":13,"skipped":70,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:30:51.133: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 34 lines ...
Jun 15 03:29:37.769: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-29hj9hd 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-29 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-29hj9hd,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},AllocatedResources:ResourceList{},ResizeStatus:nil,},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-29 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-29hj9hd,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},AllocatedResources:ResourceList{},ResizeStatus:nil,},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-29 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-29hj9hd,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},AllocatedResources:ResourceList{},ResizeStatus:nil,},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-29hj9hd fe23996f-fb3e-42fc-bf2f-a0750ccca863 11853 0 2022-06-15 03:29:37 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update storage.k8s.io/v1 2022-06-15 03:29:37 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-59flf pvc- provisioning-29 b245b187-ee69-4b44-b9eb-e190be9a4fc3 11869 0 2022-06-15 03:29:38 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection] [{e2e.test Update v1 2022-06-15 03:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-29hj9hd,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},AllocatedResources:ResourceList{},ResizeStatus:nil,},}
STEP: Deleting pod pod-c4fbe909-b8b2-4503-83b2-afe458559c9d in namespace provisioning-29
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Jun 15 03:29:47.399: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-qcg2b" in namespace "provisioning-29" to be "Succeeded or Failed"
Jun 15 03:29:47.544: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 144.697469ms
Jun 15 03:29:49.689: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289786006s
Jun 15 03:29:51.834: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434663523s
Jun 15 03:29:53.980: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580435326s
Jun 15 03:29:56.124: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724641255s
Jun 15 03:29:58.268: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86884357s
... skipping 2 lines ...
Jun 15 03:30:04.706: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.306840185s
Jun 15 03:30:06.851: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.451367624s
Jun 15 03:30:08.996: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.596141685s
Jun 15 03:30:11.141: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.741138746s
Jun 15 03:30:13.285: INFO: Pod "pvc-volume-tester-writer-qcg2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.885486318s
STEP: Saw pod success
Jun 15 03:30:13.285: INFO: Pod "pvc-volume-tester-writer-qcg2b" satisfied condition "Succeeded or Failed"
Jun 15 03:30:13.855: INFO: Pod pvc-volume-tester-writer-qcg2b has the following logs:
Jun 15 03:30:13.855: INFO: Deleting pod "pvc-volume-tester-writer-qcg2b" in namespace "provisioning-29"
Jun 15 03:30:14.003: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-qcg2b" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "i-05fe3937684c9d649"
Jun 15 03:30:14.584: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-nkg58" in namespace "provisioning-29" to be "Succeeded or Failed"
Jun 15 03:30:14.729: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Pending", Reason="", readiness=false. Elapsed: 144.387923ms
Jun 15 03:30:16.875: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290070042s
Jun 15 03:30:19.021: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436893325s
Jun 15 03:30:21.167: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582125694s
Jun 15 03:30:23.312: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.727907868s
Jun 15 03:30:25.458: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.873403354s
Jun 15 03:30:27.604: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Pending", Reason="", readiness=false. Elapsed: 13.019156965s
Jun 15 03:30:29.750: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Pending", Reason="", readiness=false. Elapsed: 15.165361893s
Jun 15 03:30:31.894: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Pending", Reason="", readiness=false. Elapsed: 17.309766619s
Jun 15 03:30:34.039: INFO: Pod "pvc-volume-tester-reader-nkg58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.454582143s
STEP: Saw pod success
Jun 15 03:30:34.039: INFO: Pod "pvc-volume-tester-reader-nkg58" satisfied condition "Succeeded or Failed"
Jun 15 03:30:34.335: INFO: Pod pvc-volume-tester-reader-nkg58 has the following logs: hello world
Jun 15 03:30:34.335: INFO: Deleting pod "pvc-volume-tester-reader-nkg58" in namespace "provisioning-29"
Jun 15 03:30:34.484: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-nkg58" to be fully deleted
Jun 15 03:30:34.633: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-59flf] to have phase Bound
Jun 15 03:30:34.777: INFO: PersistentVolumeClaim pvc-59flf found and phase=Bound (143.913029ms)
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:50
should provision storage with mount options
test/e2e/storage/testsuites/provisioning.go:187
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":11,"skipped":83,"failed":0}
SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 14 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:30:52.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1438" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":14,"skipped":79,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-network] Networking
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 123 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 15 03:30:54.381: INFO: Waiting up to 5m0s for pod "pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951" in namespace "emptydir-3480" to be "Succeeded or Failed"
Jun 15 03:30:54.525: INFO: Pod "pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951": Phase="Pending", Reason="", readiness=false. Elapsed: 144.774126ms
Jun 15 03:30:56.671: INFO: Pod "pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290762124s
Jun 15 03:30:58.817: INFO: Pod "pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436624059s
Jun 15 03:31:00.963: INFO: Pod "pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582560919s
Jun 15 03:31:03.108: INFO: Pod "pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.727792781s
STEP: Saw pod success
Jun 15 03:31:03.109: INFO: Pod "pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951" satisfied condition "Succeeded or Failed"
Jun 15 03:31:03.253: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951 container test-container: <nil>
STEP: delete the pod
Jun 15 03:31:03.548: INFO: Waiting for pod pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951 to disappear
Jun 15 03:31:03.693: INFO: Pod pod-8fbcfc6e-3e34-4a5e-bd44-22b30e14a951 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:10.773 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":83,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:04.013: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
Jun 15 03:30:45.769: INFO: PersistentVolumeClaim pvc-snrjh found but phase is Pending instead of Bound.
Jun 15 03:30:47.914: INFO: PersistentVolumeClaim pvc-snrjh found and phase=Bound (10.875838782s)
Jun 15 03:30:47.914: INFO: Waiting up to 3m0s for PersistentVolume local-9nq6c to have phase Bound
Jun 15 03:30:48.058: INFO: PersistentVolume local-9nq6c found and phase=Bound (144.047209ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9zcp
STEP: Creating a pod to test subpath
Jun 15 03:30:48.495: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9zcp" in namespace "provisioning-8597" to be "Succeeded or Failed"
Jun 15 03:30:48.639: INFO: Pod "pod-subpath-test-preprovisionedpv-9zcp": Phase="Pending", Reason="", readiness=false. Elapsed: 144.117244ms
Jun 15 03:30:50.784: INFO: Pod "pod-subpath-test-preprovisionedpv-9zcp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289582068s
Jun 15 03:30:52.931: INFO: Pod "pod-subpath-test-preprovisionedpv-9zcp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435733078s
Jun 15 03:30:55.076: INFO: Pod "pod-subpath-test-preprovisionedpv-9zcp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580743914s
Jun 15 03:30:57.220: INFO: Pod "pod-subpath-test-preprovisionedpv-9zcp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.725558667s
Jun 15 03:30:59.367: INFO: Pod "pod-subpath-test-preprovisionedpv-9zcp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872090165s
Jun 15 03:31:01.512: INFO: Pod "pod-subpath-test-preprovisionedpv-9zcp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.01684772s
STEP: Saw pod success
Jun 15 03:31:01.512: INFO: Pod "pod-subpath-test-preprovisionedpv-9zcp" satisfied condition "Succeeded or Failed"
Jun 15 03:31:01.656: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod pod-subpath-test-preprovisionedpv-9zcp container test-container-subpath-preprovisionedpv-9zcp: <nil>
STEP: delete the pod
Jun 15 03:31:01.953: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9zcp to disappear
Jun 15 03:31:02.097: INFO: Pod pod-subpath-test-preprovisionedpv-9zcp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9zcp
Jun 15 03:31:02.097: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9zcp" in namespace "provisioning-8597"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":17,"skipped":164,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Jun 15 03:30:46.113: INFO: PersistentVolumeClaim pvc-7jjpg found but phase is Pending instead of Bound.
Jun 15 03:30:48.258: INFO: PersistentVolumeClaim pvc-7jjpg found and phase=Bound (4.43290783s)
Jun 15 03:30:48.258: INFO: Waiting up to 3m0s for PersistentVolume local-rb5bb to have phase Bound
Jun 15 03:30:48.404: INFO: PersistentVolume local-rb5bb found and phase=Bound (145.583067ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dnwb
STEP: Creating a pod to test subpath
Jun 15 03:30:48.837: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dnwb" in namespace "provisioning-4836" to be "Succeeded or Failed"
Jun 15 03:30:48.981: INFO: Pod "pod-subpath-test-preprovisionedpv-dnwb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.676089ms
Jun 15 03:30:51.126: INFO: Pod "pod-subpath-test-preprovisionedpv-dnwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288791285s
Jun 15 03:30:53.272: INFO: Pod "pod-subpath-test-preprovisionedpv-dnwb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434984338s
Jun 15 03:30:55.417: INFO: Pod "pod-subpath-test-preprovisionedpv-dnwb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579384947s
Jun 15 03:30:57.562: INFO: Pod "pod-subpath-test-preprovisionedpv-dnwb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724724605s
Jun 15 03:30:59.710: INFO: Pod "pod-subpath-test-preprovisionedpv-dnwb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872488726s
Jun 15 03:31:01.854: INFO: Pod "pod-subpath-test-preprovisionedpv-dnwb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.016698421s
STEP: Saw pod success
Jun 15 03:31:01.854: INFO: Pod "pod-subpath-test-preprovisionedpv-dnwb" satisfied condition "Succeeded or Failed"
Jun 15 03:31:01.998: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-preprovisionedpv-dnwb container test-container-subpath-preprovisionedpv-dnwb: <nil>
STEP: delete the pod
Jun 15 03:31:02.300: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dnwb to disappear
Jun 15 03:31:02.444: INFO: Pod pod-subpath-test-preprovisionedpv-dnwb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dnwb
Jun 15 03:31:02.444: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dnwb" in namespace "provisioning-4836"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support existing single file [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":12,"skipped":96,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:04.431: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:188
... skipping 100 lines ...
test/e2e/storage/testsuites/capacity.go:112
Driver csi-hostpath doesn't publish storage capacity -- skipping
test/e2e/storage/testsuites/capacity.go:78
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:04.500: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/framework/framework.go:188
... skipping 117 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (default fs)] provisioning
test/e2e/storage/framework/testsuite.go:50
should mount multiple PV pointing to the same storage on the same node
test/e2e/storage/testsuites/provisioning.go:518
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node","total":-1,"completed":8,"skipped":66,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 52 lines ...
• [SLOW TEST:13.832 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":21,"skipped":103,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:05.004: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 19 lines ...
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:31:04.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating projection with secret that has name secret-emptykey-test-26e3fdc0-35b0-4276-896b-4e3231aa2901
[AfterEach] [sig-node] Secrets
test/e2e/framework/framework.go:188
Jun 15 03:31:06.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7500" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:06.434: INFO: Only supported for providers [azure] (not aws)
... skipping 51 lines ...
Jun 15 03:30:46.685: INFO: PersistentVolumeClaim pvc-n2zkk found but phase is Pending instead of Bound.
Jun 15 03:30:48.833: INFO: PersistentVolumeClaim pvc-n2zkk found and phase=Bound (13.025279593s)
Jun 15 03:30:48.833: INFO: Waiting up to 3m0s for PersistentVolume local-xx9fq to have phase Bound
Jun 15 03:30:48.977: INFO: PersistentVolume local-xx9fq found and phase=Bound (144.134895ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-g8ds
STEP: Creating a pod to test exec-volume-test
Jun 15 03:30:49.413: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-g8ds" in namespace "volume-5372" to be "Succeeded or Failed"
Jun 15 03:30:49.558: INFO: Pod "exec-volume-test-preprovisionedpv-g8ds": Phase="Pending", Reason="", readiness=false. Elapsed: 144.187876ms
Jun 15 03:30:51.703: INFO: Pod "exec-volume-test-preprovisionedpv-g8ds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289884063s
Jun 15 03:30:53.849: INFO: Pod "exec-volume-test-preprovisionedpv-g8ds": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435560089s
Jun 15 03:30:55.994: INFO: Pod "exec-volume-test-preprovisionedpv-g8ds": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580368105s
Jun 15 03:30:58.141: INFO: Pod "exec-volume-test-preprovisionedpv-g8ds": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72733273s
Jun 15 03:31:00.286: INFO: Pod "exec-volume-test-preprovisionedpv-g8ds": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872293184s
Jun 15 03:31:02.431: INFO: Pod "exec-volume-test-preprovisionedpv-g8ds": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.017994585s
STEP: Saw pod success
Jun 15 03:31:02.432: INFO: Pod "exec-volume-test-preprovisionedpv-g8ds" satisfied condition "Succeeded or Failed"
Jun 15 03:31:02.576: INFO: Trying to get logs from node i-0a5092cc559ae3bff pod exec-volume-test-preprovisionedpv-g8ds container exec-container-preprovisionedpv-g8ds: <nil>
STEP: delete the pod
Jun 15 03:31:02.870: INFO: Waiting for pod exec-volume-test-preprovisionedpv-g8ds to disappear
Jun 15 03:31:03.015: INFO: Pod exec-volume-test-preprovisionedpv-g8ds no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-g8ds
Jun 15 03:31:03.015: INFO: Deleting pod "exec-volume-test-preprovisionedpv-g8ds" in namespace "volume-5372"
... skipping 28 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (ext4)] volumes
test/e2e/storage/framework/testsuite.go:50
should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":17,"skipped":112,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:06.833: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 44 lines ...
Jun 15 03:30:32.129: INFO: PersistentVolumeClaim pvc-khwhg found but phase is Pending instead of Bound.
Jun 15 03:30:34.275: INFO: PersistentVolumeClaim pvc-khwhg found and phase=Bound (8.723671672s)
Jun 15 03:30:34.275: INFO: Waiting up to 3m0s for PersistentVolume local-bmmj2 to have phase Bound
Jun 15 03:30:34.420: INFO: PersistentVolume local-bmmj2 found and phase=Bound (144.597527ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-76l7
STEP: Creating a pod to test atomic-volume-subpath
Jun 15 03:30:34.860: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-76l7" in namespace "provisioning-3623" to be "Succeeded or Failed"
Jun 15 03:30:35.005: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Pending", Reason="", readiness=false. Elapsed: 145.630837ms
Jun 15 03:30:37.150: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290512428s
Jun 15 03:30:39.296: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436357536s
Jun 15 03:30:41.442: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582052844s
Jun 15 03:30:43.587: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Running", Reason="", readiness=true. Elapsed: 8.727254602s
Jun 15 03:30:45.732: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Running", Reason="", readiness=true. Elapsed: 10.87243696s
... skipping 4 lines ...
Jun 15 03:30:56.467: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Running", Reason="", readiness=true. Elapsed: 21.606747663s
Jun 15 03:30:58.612: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Running", Reason="", readiness=true. Elapsed: 23.752328169s
Jun 15 03:31:00.756: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Running", Reason="", readiness=true. Elapsed: 25.896670899s
Jun 15 03:31:02.901: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Running", Reason="", readiness=true. Elapsed: 28.041149408s
Jun 15 03:31:05.045: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.185420525s
STEP: Saw pod success
Jun 15 03:31:05.045: INFO: Pod "pod-subpath-test-preprovisionedpv-76l7" satisfied condition "Succeeded or Failed"
Jun 15 03:31:05.189: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-subpath-test-preprovisionedpv-76l7 container test-container-subpath-preprovisionedpv-76l7: <nil>
STEP: delete the pod
Jun 15 03:31:05.482: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-76l7 to disappear
Jun 15 03:31:05.626: INFO: Pod pod-subpath-test-preprovisionedpv-76l7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-76l7
Jun 15 03:31:05.626: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-76l7" in namespace "provisioning-3623"
... skipping 21 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:50
should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":38,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:07.616: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 28 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: local][LocalVolumeType: tmpfs]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 3 lines ...
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:29:16.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
test/e2e/apps/cronjob.go:291
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-4800" for this suite.
• [SLOW TEST:114.315 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should delete failed finished jobs with limit of one job
test/e2e/apps/cronjob.go:291
------------------------------
[BeforeEach] [sig-network] DNS
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:30:32.720: INFO: >>> kubeConfig: /root/.kube/config
... skipping 24 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 15 03:30:39.921: INFO: File wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:30:40.066: INFO: File jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:30:40.067: INFO: Lookups using dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e failed for: [wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local]
Jun 15 03:30:45.213: INFO: File wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:30:45.359: INFO: File jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:30:45.359: INFO: Lookups using dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e failed for: [wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local]
Jun 15 03:30:50.215: INFO: File wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:30:50.360: INFO: File jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:30:50.360: INFO: Lookups using dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e failed for: [wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local]
Jun 15 03:30:55.215: INFO: File wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:30:55.361: INFO: File jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:30:55.361: INFO: Lookups using dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e failed for: [wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local]
Jun 15 03:31:00.220: INFO: File wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:31:00.365: INFO: File jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:31:00.365: INFO: Lookups using dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e failed for: [wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local]
Jun 15 03:31:05.215: INFO: File wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:31:05.360: INFO: File jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local from pod dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 15 03:31:05.360: INFO: Lookups using dns-4940/dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e failed for: [wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local jessie_udp@dns-test-service-3.dns-4940.svc.cluster.local]
Jun 15 03:31:10.363: INFO: DNS probes using dns-test-7edb600f-4354-48b5-9be0-74c5aeeaf36e succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4940.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4940.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
• [SLOW TEST:41.716 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
should provide DNS for ExternalName services [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":14,"skipped":117,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:14.450: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/framework/framework.go:188
... skipping 78 lines ...
Driver local doesn't support DynamicPV -- skipping
test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":10,"skipped":75,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:31:11.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 53 lines ...
• [SLOW TEST:13.188 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":10,"skipped":86,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:19.708: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/framework/framework.go:188
... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: gluster]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (delayed binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for node OS distro [gci ubuntu custom] (not debian)
test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 39 lines ...
• [SLOW TEST:15.739 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
should run a job to completion when tasks succeed
test/e2e/apps/job.go:81
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":22,"skipped":111,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:20.818: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 107 lines ...
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
test/e2e/storage/framework/testsuite.go:50
should support multiple inline ephemeral volumes
test/e2e/storage/testsuites/ephemeral.go:254
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":16,"skipped":141,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] RuntimeClass
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 7 lines ...
test/e2e/framework/framework.go:188
Jun 15 03:31:23.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-3046" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":142,"failed":0}
SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:23.771: INFO: Only supported for providers [azure] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
[Driver: azure-disk]
test/e2e/storage/in_tree_volumes.go:63
[Testpattern: Dynamic PV (immediate binding)] topology
test/e2e/storage/framework/testsuite.go:50
should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
test/e2e/storage/testsuites/topology.go:194
Only supported for providers [azure] (not aws)
test/e2e/storage/drivers/in_tree.go:1576
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":11,"skipped":75,"failed":0}
[BeforeEach] [sig-scheduling] LimitRange
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jun 15 03:31:14.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-volume-9e4c573f-40d3-4fde-adce-e6445178244d
STEP: Creating a pod to test consume configMaps
Jun 15 03:31:21.032: INFO: Waiting up to 5m0s for pod "pod-configmaps-fe15eeda-a710-480a-90af-1b2b487041f5" in namespace "configmap-2725" to be "Succeeded or Failed"
Jun 15 03:31:21.176: INFO: Pod "pod-configmaps-fe15eeda-a710-480a-90af-1b2b487041f5": Phase="Pending", Reason="", readiness=false. Elapsed: 144.142642ms
Jun 15 03:31:23.321: INFO: Pod "pod-configmaps-fe15eeda-a710-480a-90af-1b2b487041f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289488174s
Jun 15 03:31:25.467: INFO: Pod "pod-configmaps-fe15eeda-a710-480a-90af-1b2b487041f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434946785s
STEP: Saw pod success
Jun 15 03:31:25.467: INFO: Pod "pod-configmaps-fe15eeda-a710-480a-90af-1b2b487041f5" satisfied condition "Succeeded or Failed"
Jun 15 03:31:25.611: INFO: Trying to get logs from node i-0b28fcd2505512be6 pod pod-configmaps-fe15eeda-a710-480a-90af-1b2b487041f5 container agnhost-container: <nil>
STEP: delete the pod
Jun 15 03:31:25.907: INFO: Waiting for pod pod-configmaps-fe15eeda-a710-480a-90af-1b2b487041f5 to disappear
Jun 15 03:31:26.053: INFO: Pod pod-configmaps-fe15eeda-a710-480a-90af-1b2b487041f5 no longer exists
[AfterEach] [sig-storage] ConfigMap
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.614 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":88,"failed":0}
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:26.365: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 161 lines ...
test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook
test/e2e/common/node/lifecycle_hook.go:46
should execute prestop http hook properly [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":109,"failed":0}
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:29.800: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 15 03:31:24.961: INFO: Waiting up to 5m0s for pod "pod-afbc0e72-3e8d-4443-9616-030b44fc5f54" in namespace "emptydir-3281" to be "Succeeded or Failed"
Jun 15 03:31:25.104: INFO: Pod "pod-afbc0e72-3e8d-4443-9616-030b44fc5f54": Phase="Pending", Reason="", readiness=false. Elapsed: 143.367392ms
Jun 15 03:31:27.250: INFO: Pod "pod-afbc0e72-3e8d-4443-9616-030b44fc5f54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28907645s
Jun 15 03:31:29.394: INFO: Pod "pod-afbc0e72-3e8d-4443-9616-030b44fc5f54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433211152s
STEP: Saw pod success
Jun 15 03:31:29.394: INFO: Pod "pod-afbc0e72-3e8d-4443-9616-030b44fc5f54" satisfied condition "Succeeded or Failed"
Jun 15 03:31:29.537: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-afbc0e72-3e8d-4443-9616-030b44fc5f54 container test-container: <nil>
STEP: delete the pod
Jun 15 03:31:29.831: INFO: Waiting for pod pod-afbc0e72-3e8d-4443-9616-030b44fc5f54 to disappear
Jun 15 03:31:29.974: INFO: Pod pod-afbc0e72-3e8d-4443-9616-030b44fc5f54 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
test/e2e/framework/framework.go:188
... skipping 4 lines ...
• [SLOW TEST:6.452 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":152,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:30.306: INFO: Only supported for providers [vsphere] (not aws)
... skipping 126 lines ...
• [SLOW TEST:44.172 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
should not create pods when created in suspend state
test/e2e/apps/job.go:103
------------------------------
{"msg":"PASSED [sig-apps] Job should not create pods when created in suspend state","total":-1,"completed":10,"skipped":54,"failed":0}
SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:30.614: INFO: Only supported for providers [vsphere] (not aws)
... skipping 118 lines ...
Jun 15 03:31:16.604: INFO: PersistentVolumeClaim pvc-xccpq found but phase is Pending instead of Bound.
Jun 15 03:31:18.749: INFO: PersistentVolumeClaim pvc-xccpq found and phase=Bound (8.72728484s)
Jun 15 03:31:18.749: INFO: Waiting up to 3m0s for PersistentVolume local-4wssq to have phase Bound
Jun 15 03:31:18.893: INFO: PersistentVolume local-4wssq found and phase=Bound (143.902014ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k7kk
STEP: Creating a pod to test subpath
Jun 15 03:31:19.329: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k7kk" in namespace "provisioning-5258" to be "Succeeded or Failed"
Jun 15 03:31:19.474: INFO: Pod "pod-subpath-test-preprovisionedpv-k7kk": Phase="Pending", Reason="", readiness=false. Elapsed: 144.319303ms
Jun 15 03:31:21.618: INFO: Pod "pod-subpath-test-preprovisionedpv-k7kk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288804426s
Jun 15 03:31:23.763: INFO: Pod "pod-subpath-test-preprovisionedpv-k7kk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433200938s
Jun 15 03:31:25.909: INFO: Pod "pod-subpath-test-preprovisionedpv-k7kk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579429263s
STEP: Saw pod success
Jun 15 03:31:25.909: INFO: Pod "pod-subpath-test-preprovisionedpv-k7kk" satisfied condition "Succeeded or Failed"
Jun 15 03:31:26.054: INFO: Trying to get logs from node i-08d19c5de9fb20ea1 pod pod-subpath-test-preprovisionedpv-k7kk container test-container-volume-preprovisionedpv-k7kk: <nil>
STEP: delete the pod
Jun 15 03:31:26.352: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k7kk to disappear
Jun 15 03:31:26.496: INFO: Pod pod-subpath-test-preprovisionedpv-k7kk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k7kk
Jun 15 03:31:26.496: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k7kk" in namespace "provisioning-5258"
... skipping 34 lines ...
test/e2e/storage/in_tree_volumes.go:63
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support non-existent path
    test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":11,"skipped":121,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
... skipping 83 lines ...
test/e2e/storage/persistent_volumes-local.go:194
  Two pods mounting a local volume one after the other
  test/e2e/storage/persistent_volumes-local.go:256
    should be able to write from pod1 and read from pod2
    test/e2e/storage/persistent_volumes-local.go:257
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":18,"skipped":124,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/storage/framework/testsuite.go:51
Jun 15 03:31:31.643: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
test/e2e/framework/framework.go:188
... skipping 48 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:194

      Driver hostPathSymlink doesn't support DynamicPV -- skipping
      test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 32931 lines ...
29 numNATRules=68\nI0615 03:39:01.258918 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.14041ms\"\nI0615 03:39:04.974528 10 service.go:322] \"Service updated ports\" service=\"crd-webhook-4573/e2e-test-crd-conversion-webhook\" portCount=1\nI0615 03:39:04.974578 10 service.go:437] \"Adding new service port\" portName=\"crd-webhook-4573/e2e-test-crd-conversion-webhook\" servicePort=\"172.20.31.218:9443/TCP\"\nI0615 03:39:04.974615 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:05.007597 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=5 numNATChains=29 numNATRules=68\nI0615 03:39:05.011455 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.882615ms\"\nI0615 03:39:05.011519 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:05.035506 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=72\nI0615 03:39:05.039214 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"27.728558ms\"\nI0615 03:39:07.948839 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:07.977716 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=70\nI0615 03:39:07.982149 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.395066ms\"\nI0615 03:39:07.982436 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:08.008756 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=6 numNATChains=30 numNATRules=62\nI0615 03:39:08.012410 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.064132ms\"\nI0615 03:39:09.012648 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:09.042965 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=6 
numNATChains=25 numNATRules=57\nI0615 03:39:09.047844 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.271943ms\"\nI0615 03:39:10.093986 10 service.go:322] \"Service updated ports\" service=\"crd-webhook-4573/e2e-test-crd-conversion-webhook\" portCount=0\nI0615 03:39:10.094029 10 service.go:462] \"Removing service port\" portName=\"crd-webhook-4573/e2e-test-crd-conversion-webhook\"\nI0615 03:39:10.094071 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:10.122653 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=55\nI0615 03:39:10.127531 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.499706ms\"\nI0615 03:39:11.127712 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:11.152585 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=53\nI0615 03:39:11.157086 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.428867ms\"\nI0615 03:39:17.538171 10 service.go:322] \"Service updated ports\" service=\"services-1919/service-headless-toggled\" portCount=0\nI0615 03:39:17.538212 10 service.go:462] \"Removing service port\" portName=\"services-1919/service-headless-toggled\"\nI0615 03:39:17.538253 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:17.565521 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=47\nI0615 03:39:17.576170 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.955312ms\"\nI0615 03:39:19.144237 10 service.go:322] \"Service updated ports\" service=\"webhook-5351/e2e-test-webhook\" portCount=1\nI0615 03:39:19.144286 10 service.go:437] \"Adding new service port\" portName=\"webhook-5351/e2e-test-webhook\" servicePort=\"172.20.19.179:8443/TCP\"\nI0615 03:39:19.144326 10 proxier.go:853] \"Syncing iptables 
rules\"\nI0615 03:39:19.183152 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=43\nI0615 03:39:19.189005 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.722262ms\"\nI0615 03:39:19.189091 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:19.228651 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=47\nI0615 03:39:19.233872 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.824343ms\"\nI0615 03:39:21.308581 10 service.go:322] \"Service updated ports\" service=\"webhook-5351/e2e-test-webhook\" portCount=0\nI0615 03:39:21.308636 10 service.go:462] \"Removing service port\" portName=\"webhook-5351/e2e-test-webhook\"\nI0615 03:39:21.308693 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:21.336782 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=45\nI0615 03:39:21.340603 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.980772ms\"\nI0615 03:39:21.340804 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:21.369208 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=43\nI0615 03:39:21.373082 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.447328ms\"\nI0615 03:39:21.848643 10 service.go:322] \"Service updated ports\" service=\"services-2860/test-service-9s6g6\" portCount=1\nI0615 03:39:22.282562 10 service.go:322] \"Service updated ports\" service=\"services-2860/test-service-9s6g6\" portCount=1\nI0615 03:39:22.373889 10 service.go:437] \"Adding new service port\" portName=\"services-2860/test-service-9s6g6:http\" servicePort=\"172.20.11.75:80/TCP\"\nI0615 03:39:22.373945 10 proxier.go:853] \"Syncing iptables 
rules\"\nI0615 03:39:22.402001 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=19 numNATRules=43\nI0615 03:39:22.409741 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.875006ms\"\nI0615 03:39:23.005354 10 service.go:322] \"Service updated ports\" service=\"services-2860/test-service-9s6g6\" portCount=1\nI0615 03:39:23.293925 10 service.go:322] \"Service updated ports\" service=\"services-2860/test-service-9s6g6\" portCount=0\nI0615 03:39:23.410173 10 service.go:462] \"Removing service port\" portName=\"services-2860/test-service-9s6g6:http\"\nI0615 03:39:23.410233 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:23.438551 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=43\nI0615 03:39:23.443707 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.586149ms\"\nI0615 03:39:29.989665 10 service.go:322] \"Service updated ports\" service=\"deployment-4350/test-rolling-update-with-lb\" portCount=0\nI0615 03:39:29.989708 10 service.go:462] \"Removing service port\" portName=\"deployment-4350/test-rolling-update-with-lb\"\nI0615 03:39:29.989749 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:30.021339 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=43\nI0615 03:39:30.025356 10 service_health.go:107] \"Closing healthcheck\" service=\"deployment-4350/test-rolling-update-with-lb\" port=31657\nE0615 03:39:30.025426 10 service_health.go:187] \"Healthcheck closed\" err=\"accept tcp [::]:31657: use of closed network connection\" service=\"deployment-4350/test-rolling-update-with-lb\"\nI0615 03:39:30.025446 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.741662ms\"\nI0615 03:39:31.873551 10 service.go:322] \"Service updated ports\" 
service=\"services-1919/service-headless-toggled\" portCount=1\nI0615 03:39:31.873609 10 service.go:437] \"Adding new service port\" portName=\"services-1919/service-headless-toggled\" servicePort=\"172.20.6.132:80/TCP\"\nI0615 03:39:31.873648 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:31.901448 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=53\nI0615 03:39:31.906596 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.993783ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-i-08d19c5de9fb20ea1 ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-i-0a5092cc559ae3bff ====\n2022/06/15 03:20:43 Running command:\nCommand env: (log-file=/var/log/kube-proxy.log, also-stdout=true, redirect-stderr=true)\nRun from directory: \nExecutable path: /usr/local/bin/kube-proxy\nArgs (comma-delimited): /usr/local/bin/kube-proxy,--conntrack-max-per-core=131072,--hostname-override=i-0a5092cc559ae3bff,--kubeconfig=/var/lib/kube-proxy/kubeconfig,--master=https://api.internal.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io,--oom-score-adj=-998,--v=2\n2022/06/15 03:20:43 Now listening for interrupts\nI0615 03:20:44.015299 10 flags.go:64] FLAG: --add-dir-header=\"false\"\nI0615 03:20:44.015622 10 flags.go:64] FLAG: --alsologtostderr=\"false\"\nI0615 03:20:44.015710 10 flags.go:64] FLAG: --bind-address=\"0.0.0.0\"\nI0615 03:20:44.015790 10 flags.go:64] FLAG: --bind-address-hard-fail=\"false\"\nI0615 03:20:44.015863 10 flags.go:64] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0615 03:20:44.015943 10 flags.go:64] FLAG: --cleanup=\"false\"\nI0615 03:20:44.016015 10 flags.go:64] FLAG: --cluster-cidr=\"\"\nI0615 03:20:44.016088 10 flags.go:64] FLAG: --config=\"\"\nI0615 03:20:44.016169 10 flags.go:64] FLAG: --config-sync-period=\"15m0s\"\nI0615 03:20:44.016249 10 flags.go:64] FLAG: 
--conntrack-max-per-core=\"131072\"\nI0615 03:20:44.016268 10 flags.go:64] FLAG: --conntrack-min=\"131072\"\nI0615 03:20:44.016283 10 flags.go:64] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0615 03:20:44.016297 10 flags.go:64] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0615 03:20:44.016323 10 flags.go:64] FLAG: --detect-local-mode=\"\"\nI0615 03:20:44.016395 10 flags.go:64] FLAG: --feature-gates=\"\"\nI0615 03:20:44.016482 10 flags.go:64] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0615 03:20:44.016499 10 flags.go:64] FLAG: --healthz-port=\"10256\"\nI0615 03:20:44.016583 10 flags.go:64] FLAG: --help=\"false\"\nI0615 03:20:44.016663 10 flags.go:64] FLAG: --hostname-override=\"i-0a5092cc559ae3bff\"\nI0615 03:20:44.016686 10 flags.go:64] FLAG: --iptables-masquerade-bit=\"14\"\nI0615 03:20:44.016803 10 flags.go:64] FLAG: --iptables-min-sync-period=\"1s\"\nI0615 03:20:44.016845 10 flags.go:64] FLAG: --iptables-sync-period=\"30s\"\nI0615 03:20:44.016861 10 flags.go:64] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0615 03:20:44.016881 10 flags.go:64] FLAG: --ipvs-min-sync-period=\"0s\"\nI0615 03:20:44.016918 10 flags.go:64] FLAG: --ipvs-scheduler=\"\"\nI0615 03:20:44.016940 10 flags.go:64] FLAG: --ipvs-strict-arp=\"false\"\nI0615 03:20:44.016955 10 flags.go:64] FLAG: --ipvs-sync-period=\"30s\"\nI0615 03:20:44.016969 10 flags.go:64] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0615 03:20:44.016982 10 flags.go:64] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0615 03:20:44.016996 10 flags.go:64] FLAG: --ipvs-udp-timeout=\"0s\"\nI0615 03:20:44.017019 10 flags.go:64] FLAG: --kube-api-burst=\"10\"\nI0615 03:20:44.017040 10 flags.go:64] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0615 03:20:44.017056 10 flags.go:64] FLAG: --kube-api-qps=\"5\"\nI0615 03:20:44.017091 10 flags.go:64] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0615 03:20:44.017123 10 flags.go:64] FLAG: --log-backtrace-at=\":0\"\nI0615 03:20:44.017152 10 flags.go:64] FLAG: 
--log-dir=\"\"\nI0615 03:20:44.017168 10 flags.go:64] FLAG: --log-file=\"\"\nI0615 03:20:44.017182 10 flags.go:64] FLAG: --log-file-max-size=\"1800\"\nI0615 03:20:44.017197 10 flags.go:64] FLAG: --log-flush-frequency=\"5s\"\nI0615 03:20:44.017210 10 flags.go:64] FLAG: --logtostderr=\"true\"\nI0615 03:20:44.017235 10 flags.go:64] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0615 03:20:44.017257 10 flags.go:64] FLAG: --masquerade-all=\"false\"\nI0615 03:20:44.017486 10 flags.go:64] FLAG: --master=\"https://api.internal.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io\"\nI0615 03:20:44.017513 10 flags.go:64] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0615 03:20:44.017528 10 flags.go:64] FLAG: --metrics-port=\"10249\"\nI0615 03:20:44.017613 10 flags.go:64] FLAG: --nodeport-addresses=\"[]\"\nI0615 03:20:44.017694 10 flags.go:64] FLAG: --one-output=\"false\"\nI0615 03:20:44.017712 10 flags.go:64] FLAG: --oom-score-adj=\"-998\"\nI0615 03:20:44.017791 10 flags.go:64] FLAG: --pod-bridge-interface=\"\"\nI0615 03:20:44.017814 10 flags.go:64] FLAG: --pod-interface-name-prefix=\"\"\nI0615 03:20:44.017892 10 flags.go:64] FLAG: --profiling=\"false\"\nI0615 03:20:44.017966 10 flags.go:64] FLAG: --proxy-mode=\"\"\nI0615 03:20:44.017984 10 flags.go:64] FLAG: --proxy-port-range=\"\"\nI0615 03:20:44.018064 10 flags.go:64] FLAG: --show-hidden-metrics-for-version=\"\"\nI0615 03:20:44.018143 10 flags.go:64] FLAG: --skip-headers=\"false\"\nI0615 03:20:44.018166 10 flags.go:64] FLAG: --skip-log-headers=\"false\"\nI0615 03:20:44.018282 10 flags.go:64] FLAG: --stderrthreshold=\"2\"\nI0615 03:20:44.018323 10 flags.go:64] FLAG: --udp-timeout=\"250ms\"\nI0615 03:20:44.018340 10 flags.go:64] FLAG: --v=\"2\"\nI0615 03:20:44.018354 10 flags.go:64] FLAG: --version=\"false\"\nI0615 03:20:44.018370 10 flags.go:64] FLAG: --vmodule=\"\"\nI0615 03:20:44.018395 10 flags.go:64] FLAG: --write-config-to=\"\"\nI0615 03:20:44.018453 10 server.go:231] \"Warning, all flags 
other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP\"\nI0615 03:20:44.018713 10 feature_gate.go:245] feature gates: &{map[]}\nI0615 03:20:44.019237 10 feature_gate.go:245] feature gates: &{map[]}\nE0615 03:21:14.065958 10 node.go:152] Failed to retrieve node info: Get \"https://api.internal.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/nodes/i-0a5092cc559ae3bff\": dial tcp 203.0.113.123:443: i/o timeout\nI0615 03:21:15.142323 10 node.go:163] Successfully retrieved node IP: 172.20.40.235\nI0615 03:21:15.142354 10 server_others.go:138] \"Detected node IP\" address=\"172.20.40.235\"\nI0615 03:21:15.142406 10 server_others.go:578] \"Unknown proxy mode, assuming iptables proxy\" proxyMode=\"\"\nI0615 03:21:15.142519 10 server_others.go:175] \"DetectLocalMode\" LocalMode=\"ClusterCIDR\"\nI0615 03:21:15.195944 10 server_others.go:206] \"Using iptables Proxier\"\nI0615 03:21:15.195980 10 server_others.go:213] \"kube-proxy running in dual-stack mode\" ipFamily=IPv4\nI0615 03:21:15.195988 10 server_others.go:214] \"Creating dualStackProxier for iptables\"\nI0615 03:21:15.195995 10 server_others.go:485] \"Detect-local-mode set to ClusterCIDR, but no cluster CIDR defined\"\nI0615 03:21:15.196000 10 server_others.go:541] \"Defaulting to no-op detect-local\" detect-local-mode=\"ClusterCIDR\"\nI0615 03:21:15.196024 10 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0615 03:21:15.196085 10 utils.go:431] \"Changed sysctl\" name=\"net/ipv4/conf/all/route_localnet\" before=0 after=1\nI0615 03:21:15.196618 10 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI0615 03:21:15.196647 10 proxier.go:319] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0615 03:21:15.196682 10 proxier.go:329] \"Iptables supports --random-fully\" 
ipFamily=IPv4\nI0615 03:21:15.196691 10 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0615 03:21:15.196736 10 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI0615 03:21:15.196760 10 proxier.go:319] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0615 03:21:15.196776 10 proxier.go:329] \"Iptables supports --random-fully\" ipFamily=IPv6\nI0615 03:21:15.196926 10 server.go:661] \"Version info\" version=\"v1.24.1\"\nI0615 03:21:15.196940 10 server.go:663] \"Golang settings\" GOGC=\"\" GOMAXPROCS=\"\" GOTRACEBACK=\"\"\nI0615 03:21:15.201285 10 conntrack.go:52] \"Setting nf_conntrack_max\" nf_conntrack_max=262144\nI0615 03:21:15.202066 10 conntrack.go:100] \"Set sysctl\" entry=\"net/netfilter/nf_conntrack_tcp_timeout_close_wait\" value=3600\nI0615 03:21:15.202275 10 config.go:317] \"Starting service config controller\"\nI0615 03:21:15.202301 10 shared_informer.go:255] Waiting for caches to sync for service config\nI0615 03:21:15.202327 10 config.go:226] \"Starting endpoint slice config controller\"\nI0615 03:21:15.202333 10 shared_informer.go:255] Waiting for caches to sync for endpoint slice config\nI0615 03:21:15.204948 10 config.go:444] \"Starting node config controller\"\nI0615 03:21:15.204961 10 shared_informer.go:255] Waiting for caches to sync for node config\nI0615 03:21:15.207733 10 service.go:322] \"Service updated ports\" service=\"default/kubernetes\" portCount=1\nI0615 03:21:15.212508 10 service.go:322] \"Service updated ports\" service=\"kube-system/kube-dns\" portCount=3\nI0615 03:21:15.216793 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0615 03:21:15.217810 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0615 03:21:15.302482 10 
shared_informer.go:262] Caches are synced for endpoint slice config\nI0615 03:21:15.302482 10 shared_informer.go:262] Caches are synced for service config\nI0615 03:21:15.302660 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0615 03:21:15.302685 10 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0615 03:21:15.302834 10 service.go:437] \"Adding new service port\" portName=\"default/kubernetes:https\" servicePort=\"172.20.0.1:443/TCP\"\nI0615 03:21:15.302860 10 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns\" servicePort=\"172.20.0.10:53/UDP\"\nI0615 03:21:15.302906 10 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns-tcp\" servicePort=\"172.20.0.10:53/TCP\"\nI0615 03:21:15.302920 10 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:metrics\" servicePort=\"172.20.0.10:9153/TCP\"\nI0615 03:21:15.303008 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:15.305125 10 shared_informer.go:262] Caches are synced for node config\nI0615 03:21:15.348794 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=9\nI0615 03:21:15.373797 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"71.054947ms\"\nI0615 03:21:15.373827 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:15.407252 10 proxier.go:1464] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0615 03:21:15.408935 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.113439ms\"\nI0615 03:21:56.776239 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:56.807596 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=4 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=9\nI0615 
03:21:56.811306 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.129018ms\"\nI0615 03:21:57.782969 10 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"kube-system/kube-dns:dns\" clusterIP=\"172.20.0.10\"\nI0615 03:21:57.782989 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:57.814576 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=4 numFilterChains=4 numFilterRules=3 numNATChains=12 numNATRules=21\nI0615 03:21:57.828471 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.598684ms\"\nI0615 03:22:01.726983 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:22:01.751960 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=12 numNATRules=21\nI0615 03:22:01.755051 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.147755ms\"\nI0615 03:22:01.774890 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:22:01.800747 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=12 numNATRules=21\nI0615 03:22:01.804060 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.183656ms\"\nI0615 03:22:01.804101 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:22:01.825820 10 proxier.go:1464] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0615 03:22:01.827544 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"23.437864ms\"\nI0615 03:22:02.732195 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:22:02.759108 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30\nI0615 03:22:02.769700 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.658002ms\"\nI0615 03:24:56.341244 10 service.go:322] \"Service updated ports\" 
service=\"services-1407/no-pods\" portCount=1\nI0615 03:24:56.341326 10 service.go:437] \"Adding new service port\" portName=\"services-1407/no-pods\" servicePort=\"172.20.17.176:80/TCP\"\nI0615 03:24:56.341343 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:56.369756 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:24:56.373563 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.275317ms\"\nI0615 03:24:56.385728 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:56.419062 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:24:56.429452 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.738216ms\"\nI0615 03:24:57.825367 10 service.go:322] \"Service updated ports\" service=\"kubectl-2153/agnhost-replica\" portCount=1\nI0615 03:24:57.825411 10 service.go:437] \"Adding new service port\" portName=\"kubectl-2153/agnhost-replica\" servicePort=\"172.20.7.165:6379/TCP\"\nI0615 03:24:57.825436 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:57.852366 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:24:57.856477 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.070031ms\"\nI0615 03:24:58.856759 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:58.898321 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:24:58.906584 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.87534ms\"\nI0615 03:24:59.704902 10 service.go:322] \"Service updated ports\" service=\"kubectl-2153/agnhost-primary\" portCount=1\nI0615 03:24:59.704953 10 service.go:437] \"Adding new service port\" 
portName=\"kubectl-2153/agnhost-primary\" servicePort=\"172.20.0.118:6379/TCP\"\nI0615 03:24:59.704980 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:59.729724 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=30\nI0615 03:24:59.733255 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.307419ms\"\nI0615 03:25:00.461297 10 service.go:322] \"Service updated ports\" service=\"kubectl-2153/frontend\" portCount=1\nI0615 03:25:00.461361 10 service.go:437] \"Adding new service port\" portName=\"kubectl-2153/frontend\" servicePort=\"172.20.8.106:80/TCP\"\nI0615 03:25:00.461395 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:00.498204 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=30\nI0615 03:25:00.503484 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.130742ms\"\nI0615 03:25:01.504615 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:01.538370 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=30\nI0615 03:25:01.542336 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.77867ms\"\nI0615 03:25:03.101654 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:03.125944 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=34\nI0615 03:25:03.129605 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"27.984125ms\"\nI0615 03:25:05.740182 10 service.go:322] \"Service updated ports\" service=\"webhook-2310/e2e-test-webhook\" portCount=1\nI0615 03:25:05.740237 10 service.go:437] \"Adding new service port\" portName=\"webhook-2310/e2e-test-webhook\" servicePort=\"172.20.26.53:8443/TCP\"\nI0615 03:25:05.740269 10 proxier.go:853] 
\"Syncing iptables rules\"\nI0615 03:25:05.817765 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=8 numFilterChains=4 numFilterRules=7 numNATChains=17 numNATRules=34\nI0615 03:25:05.827304 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"87.074424ms\"\nI0615 03:25:05.827439 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:05.914297 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=38\nI0615 03:25:05.956755 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"129.353744ms\"\nI0615 03:25:06.956994 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:06.981605 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=42\nI0615 03:25:06.985141 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.204107ms\"\nI0615 03:25:08.699333 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:08.732405 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=45\nI0615 03:25:08.736760 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.477783ms\"\nI0615 03:25:09.701519 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:09.735363 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=49\nI0615 03:25:09.739339 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.878551ms\"\nI0615 03:25:13.572740 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:13.597306 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=49\nI0615 03:25:13.601328 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.613649ms\"\nI0615 
03:25:13.616167 10 service.go:322] \"Service updated ports\" service=\"services-1407/no-pods\" portCount=0\nI0615 03:25:13.616209 10 service.go:462] \"Removing service port\" portName=\"services-1407/no-pods\"\nI0615 03:25:13.616235 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:13.642654 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=49\nI0615 03:25:13.647268 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.058331ms\"\nI0615 03:25:15.402053 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:15.431363 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=52\nI0615 03:25:15.436228 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.263837ms\"\nI0615 03:25:15.439020 10 service.go:322] \"Service updated ports\" service=\"webhook-2310/e2e-test-webhook\" portCount=0\nI0615 03:25:16.400116 10 service.go:462] \"Removing service port\" portName=\"webhook-2310/e2e-test-webhook\"\nI0615 03:25:16.400182 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:16.426509 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=53\nI0615 03:25:16.430661 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.555657ms\"\nI0615 03:25:17.893204 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:17.922643 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=51\nI0615 03:25:17.927347 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.188949ms\"\nI0615 03:25:17.927572 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:17.950290 10 proxier.go:1464] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 
numFilterRules=3 numNATChains=4 numNATRules=5\nI0615 03:25:17.952469 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"24.89646ms\"\nI0615 03:25:18.847866 10 service.go:322] \"Service updated ports\" service=\"kubectl-2153/agnhost-replica\" portCount=0\nI0615 03:25:18.847910 10 service.go:462] \"Removing service port\" portName=\"kubectl-2153/agnhost-replica\"\nI0615 03:25:18.847938 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:18.874672 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=47\nI0615 03:25:18.878875 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.961488ms\"\nI0615 03:25:18.878967 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:18.906930 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44\nI0615 03:25:18.910896 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.98094ms\"\nI0615 03:25:19.520600 10 service.go:322] \"Service updated ports\" service=\"kubectl-2153/agnhost-primary\" portCount=0\nI0615 03:25:19.719110 10 service.go:322] \"Service updated ports\" service=\"webhook-2293/e2e-test-webhook\" portCount=1\nI0615 03:25:19.911850 10 service.go:462] \"Removing service port\" portName=\"kubectl-2153/agnhost-primary\"\nI0615 03:25:19.911891 10 service.go:437] \"Adding new service port\" portName=\"webhook-2293/e2e-test-webhook\" servicePort=\"172.20.19.169:8443/TCP\"\nI0615 03:25:19.911941 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:19.936719 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=46\nI0615 03:25:19.940747 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.915822ms\"\nI0615 03:25:20.200040 10 service.go:322] \"Service updated ports\" service=\"kubectl-2153/frontend\" 
portCount=0\nI0615 03:25:20.941106 10 service.go:462] \"Removing service port\" portName=\"kubectl-2153/frontend\"\nI0615 03:25:20.941236 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:20.977515 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=38\nI0615 03:25:20.982119 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.027725ms\"\nI0615 03:25:22.070643 10 service.go:322] \"Service updated ports\" service=\"webhook-2293/e2e-test-webhook\" portCount=0\nI0615 03:25:22.070677 10 service.go:462] \"Removing service port\" portName=\"webhook-2293/e2e-test-webhook\"\nI0615 03:25:22.070697 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:22.114735 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=32\nI0615 03:25:22.118991 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.306821ms\"\nI0615 03:25:23.119282 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:23.145823 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30\nI0615 03:25:23.149418 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.255108ms\"\nI0615 03:25:25.185479 10 service.go:322] \"Service updated ports\" service=\"services-477/sourceip-test\" portCount=1\nI0615 03:25:25.185532 10 service.go:437] \"Adding new service port\" portName=\"services-477/sourceip-test\" servicePort=\"172.20.2.117:8080/TCP\"\nI0615 03:25:25.185557 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:25.232184 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:25:25.242501 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.968704ms\"\nI0615 03:25:25.244227 10 
proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:25.308354 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:25:25.339603 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"95.399862ms\"\nI0615 03:25:28.599814 10 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingtdhbm\"\nI0615 03:25:28.744950 10 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingj995w\"\nI0615 03:25:28.889153 10 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ing8npc6\"\nI0615 03:25:29.760636 10 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ing8npc6\"\nI0615 03:25:30.048499 10 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ing8npc6\"\nI0615 03:25:30.192551 10 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ing8npc6\"\nI0615 03:25:30.624380 10 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingj995w\"\nI0615 03:25:30.626581 10 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingtdhbm\"\nI0615 03:25:33.535507 10 service.go:322] \"Service updated ports\" service=\"services-6951/nodeport-service\" portCount=1\nI0615 03:25:33.535553 10 service.go:437] \"Adding new service port\" portName=\"services-6951/nodeport-service\" servicePort=\"172.20.12.209:80/TCP\"\nI0615 
03:25:33.535581 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:33.559986 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=30\nI0615 03:25:33.564316 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.765357ms\"\nI0615 03:25:33.564535 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:33.590339 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=30\nI0615 03:25:33.594633 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.149974ms\"\nI0615 03:25:33.689919 10 service.go:322] \"Service updated ports\" service=\"services-6951/externalsvc\" portCount=1\nI0615 03:25:34.595643 10 service.go:437] \"Adding new service port\" portName=\"services-6951/externalsvc\" servicePort=\"172.20.7.95:80/TCP\"\nI0615 03:25:34.595696 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:34.622012 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=30\nI0615 03:25:34.628999 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.426653ms\"\nI0615 03:25:35.629311 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:35.680398 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=34\nI0615 03:25:35.686478 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"57.245644ms\"\nI0615 03:25:37.552024 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:37.576513 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=38\nI0615 03:25:37.580038 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.05816ms\"\nI0615 03:25:38.267104 10 proxier.go:853] \"Syncing iptables 
rules\"\nI0615 03:25:38.296063 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41\nI0615 03:25:38.301545 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.497046ms\"\nI0615 03:25:39.425316 10 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-a-x8swd\" portCount=1\nI0615 03:25:39.425368 10 service.go:437] \"Adding new service port\" portName=\"services-6734/e2e-svc-a-x8swd:http\" servicePort=\"172.20.14.127:8001/TCP\"\nI0615 03:25:39.425398 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:39.453105 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=20 numNATRules=41\nI0615 03:25:39.457100 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.737423ms\"\nI0615 03:25:39.574569 10 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-b-mv67m\" portCount=1\nI0615 03:25:39.574623 10 service.go:437] \"Adding new service port\" portName=\"services-6734/e2e-svc-b-mv67m:http\" servicePort=\"172.20.5.204:8002/TCP\"\nI0615 03:25:39.574652 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:39.605174 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=20 numNATRules=41\nI0615 03:25:39.609410 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.792703ms\"\nI0615 03:25:39.721333 10 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-c-pw7db\" portCount=1\nI0615 03:25:40.010735 10 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-a-x8swd\" portCount=0\nI0615 03:25:40.016584 10 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-b-mv67m\" portCount=0\nI0615 03:25:40.426377 10 service.go:322] \"Service updated ports\" service=\"services-6951/nodeport-service\" 
portCount=0\nI0615 03:25:40.610601 10 service.go:437] \"Adding new service port\" portName=\"services-6734/e2e-svc-c-pw7db:http\" servicePort=\"172.20.29.237:8003/TCP\"\nI0615 03:25:40.610629 10 service.go:462] \"Removing service port\" portName=\"services-6734/e2e-svc-a-x8swd:http\"\nI0615 03:25:40.610649 10 service.go:462] \"Removing service port\" portName=\"services-6734/e2e-svc-b-mv67m:http\"\nI0615 03:25:40.610656 10 service.go:462] \"Removing service port\" portName=\"services-6951/nodeport-service\"\nI0615 03:25:40.610684 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:40.635381 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=41\nI0615 03:25:40.639592 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.013844ms\"\nI0615 03:25:42.774503 10 service.go:322] \"Service updated ports\" service=\"dns-7072/test-service-2\" portCount=1\nI0615 03:25:42.774562 10 service.go:437] \"Adding new service port\" portName=\"dns-7072/test-service-2:http\" servicePort=\"172.20.20.14:80/TCP\"\nI0615 03:25:42.774591 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:42.800703 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41\nI0615 03:25:42.805615 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.061156ms\"\nI0615 03:25:42.805674 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:42.832700 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41\nI0615 03:25:42.836991 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.336184ms\"\nI0615 03:25:45.532531 10 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-c-pw7db\" portCount=0\nI0615 03:25:45.532574 10 service.go:462] \"Removing service port\" 
portName=\"services-6734/e2e-svc-c-pw7db:http\"\nI0615 03:25:45.532711 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:45.561172 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=41\nI0615 03:25:45.565003 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.427532ms\"\nI0615 03:25:45.696436 10 service.go:322] \"Service updated ports\" service=\"resourcequota-8381/test-service\" portCount=1\nI0615 03:25:45.696492 10 service.go:437] \"Adding new service port\" portName=\"resourcequota-8381/test-service\" servicePort=\"172.20.15.254:80/TCP\"\nI0615 03:25:45.696522 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:45.723756 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41\nI0615 03:25:45.728112 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.628713ms\"\nI0615 03:25:45.848834 10 service.go:322] \"Service updated ports\" service=\"resourcequota-8381/test-service-np\" portCount=1\nI0615 03:25:46.728661 10 service.go:437] \"Adding new service port\" portName=\"resourcequota-8381/test-service-np\" servicePort=\"172.20.21.180:80/TCP\"\nI0615 03:25:46.728720 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:46.757476 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=20 numNATRules=41\nI0615 03:25:46.762189 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.499199ms\"\nI0615 03:25:48.298625 10 service.go:322] \"Service updated ports\" service=\"resourcequota-8381/test-service\" portCount=0\nI0615 03:25:48.298657 10 service.go:462] \"Removing service port\" portName=\"resourcequota-8381/test-service\"\nI0615 03:25:48.298686 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:48.335791 10 proxier.go:1464] \"Reloading service iptables 
data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=20 numNATRules=41\nI0615 03:25:48.341770 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.031013ms\"\nI0615 03:25:48.457869 10 service.go:322] \"Service updated ports\" service=\"resourcequota-8381/test-service-np\" portCount=0\nI0615 03:25:48.848088 10 service.go:462] \"Removing service port\" portName=\"resourcequota-8381/test-service-np\"\nI0615 03:25:48.848205 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:48.890928 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=39\nI0615 03:25:48.896867 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.433341ms\"\nI0615 03:25:49.897830 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:49.924720 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=36\nI0615 03:25:49.928289 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.524471ms\"\nI0615 03:25:50.968533 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:50.996224 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:25:50.999623 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.146995ms\"\nI0615 03:25:51.999828 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:52.027945 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:25:52.031506 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.752995ms\"\nI0615 03:25:53.866142 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:53.890209 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 
numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:25:53.894092 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.008766ms\"\nI0615 03:25:54.067951 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:54.095665 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:25:54.100417 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.512337ms\"\nI0615 03:25:54.243717 10 service.go:322] \"Service updated ports\" service=\"services-6951/externalsvc\" portCount=0\nI0615 03:25:55.100949 10 service.go:462] \"Removing service port\" portName=\"services-6951/externalsvc\"\nI0615 03:25:55.100996 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:55.126793 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:25:55.130341 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.418811ms\"\nI0615 03:25:57.816971 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:57.846717 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38\nI0615 03:25:57.852370 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.441676ms\"\nI0615 03:26:00.029363 10 service.go:322] \"Service updated ports\" service=\"webhook-8409/e2e-test-webhook\" portCount=1\nI0615 03:26:00.029419 10 service.go:437] \"Adding new service port\" portName=\"webhook-8409/e2e-test-webhook\" servicePort=\"172.20.7.255:8443/TCP\"\nI0615 03:26:00.029450 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:00.059610 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=38\nI0615 03:26:00.065469 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.053466ms\"\nI0615 
03:26:00.065678 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:00.098298 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=42\nI0615 03:26:00.101701 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.070823ms\"\nI0615 03:26:00.457676 10 service.go:322] \"Service updated ports\" service=\"services-477/sourceip-test\" portCount=0\nI0615 03:26:01.101876 10 service.go:462] \"Removing service port\" portName=\"services-477/sourceip-test\"\nI0615 03:26:01.101949 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:01.178869 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=40\nI0615 03:26:01.197251 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"95.387724ms\"\nI0615 03:26:02.570371 10 service.go:322] \"Service updated ports\" service=\"kubectl-4780/agnhost-primary\" portCount=1\nI0615 03:26:02.570591 10 service.go:437] \"Adding new service port\" portName=\"kubectl-4780/agnhost-primary\" servicePort=\"172.20.13.94:6379/TCP\"\nI0615 03:26:02.570635 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:02.616507 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=38\nI0615 03:26:02.621862 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.4411ms\"\nI0615 03:26:02.640116 10 service.go:322] \"Service updated ports\" service=\"webhook-8409/e2e-test-webhook\" portCount=0\nI0615 03:26:03.622039 10 service.go:462] \"Removing service port\" portName=\"webhook-8409/e2e-test-webhook\"\nI0615 03:26:03.622137 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:03.671505 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=36\nI0615 03:26:03.675716 
10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.70538ms\"\nI0615 03:26:07.269842 10 service.go:322] \"Service updated ports\" service=\"services-2131/endpoint-test2\" portCount=1\nI0615 03:26:07.269948 10 service.go:437] \"Adding new service port\" portName=\"services-2131/endpoint-test2\" servicePort=\"172.20.20.172:80/TCP\"\nI0615 03:26:07.269992 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:07.304942 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:26:07.319649 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.78019ms\"\nI0615 03:26:07.319724 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:07.377531 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:26:07.380906 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"61.210294ms\"\nI0615 03:26:09.836534 10 service.go:322] \"Service updated ports\" service=\"kubectl-4780/agnhost-primary\" portCount=0\nI0615 03:26:09.836592 10 service.go:462] \"Removing service port\" portName=\"kubectl-4780/agnhost-primary\"\nI0615 03:26:09.836623 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:09.863461 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:26:09.867347 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.754295ms\"\nI0615 03:26:09.889236 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:09.922725 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38\nI0615 03:26:09.926900 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.697737ms\"\nI0615 03:26:10.929312 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 
03:26:10.970656 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38\nI0615 03:26:10.980052 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"50.825722ms\"\nI0615 03:26:19.250901 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:19.275767 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=36\nI0615 03:26:19.279506 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.646255ms\"\nI0615 03:26:19.387221 10 service.go:322] \"Service updated ports\" service=\"dns-7072/test-service-2\" portCount=0\nI0615 03:26:19.387266 10 service.go:462] \"Removing service port\" portName=\"dns-7072/test-service-2:http\"\nI0615 03:26:19.387297 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:19.411062 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34\nI0615 03:26:19.414225 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"26.958753ms\"\nI0615 03:26:20.414968 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:20.463041 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34\nI0615 03:26:20.469117 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.229398ms\"\nI0615 03:26:24.260129 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:24.337832 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=18 numNATRules=37\nI0615 03:26:24.357595 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"97.497129ms\"\nI0615 03:26:29.003316 10 service.go:322] \"Service updated ports\" service=\"webhook-1668/e2e-test-webhook\" portCount=1\nI0615 03:26:29.003540 10 service.go:437] 
\"Adding new service port\" portName=\"webhook-1668/e2e-test-webhook\" servicePort=\"172.20.19.166:8443/TCP\"\nI0615 03:26:29.003573 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:29.030891 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=18 numNATRules=37\nI0615 03:26:29.036005 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.476326ms\"\nI0615 03:26:29.036066 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:29.063581 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=41\nI0615 03:26:29.067326 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.289951ms\"\nI0615 03:26:29.446763 10 service.go:322] \"Service updated ports\" service=\"services-6962/tolerate-unready\" portCount=1\nI0615 03:26:30.068012 10 service.go:437] \"Adding new service port\" portName=\"services-6962/tolerate-unready:http\" servicePort=\"172.20.24.82:80/TCP\"\nI0615 03:26:30.068241 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:30.100274 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=41\nI0615 03:26:30.104212 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.228069ms\"\nI0615 03:26:31.105256 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:31.131859 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=39\nI0615 03:26:31.135791 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.601218ms\"\nI0615 03:26:34.479936 10 service.go:322] \"Service updated ports\" service=\"webhook-1668/e2e-test-webhook\" portCount=0\nI0615 03:26:34.479982 10 service.go:462] \"Removing service port\" portName=\"webhook-1668/e2e-test-webhook\"\nI0615 03:26:34.480011 10 
proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:34.538146 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=36\nI0615 03:26:34.545871 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"65.883647ms\"\nI0615 03:26:34.545944 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:34.581020 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:26:34.584681 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.771736ms\"\nI0615 03:26:35.585384 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:35.621711 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=32\nI0615 03:26:35.625719 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.497583ms\"\nI0615 03:26:36.020416 10 service.go:322] \"Service updated ports\" service=\"services-2131/endpoint-test2\" portCount=0\nI0615 03:26:36.626727 10 service.go:462] \"Removing service port\" portName=\"services-2131/endpoint-test2\"\nI0615 03:26:36.626809 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:36.660214 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:26:36.663632 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.912222ms\"\nI0615 03:26:40.014843 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:40.040773 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34\nI0615 03:26:40.050780 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.976673ms\"\nI0615 03:26:53.102884 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:53.131479 10 
proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34\nI0615 03:26:53.134922 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.101256ms\"\nI0615 03:26:53.527312 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:53.575363 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=32\nI0615 03:26:53.580065 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.791764ms\"\nI0615 03:26:55.268943 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:55.294628 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34\nI0615 03:26:55.298272 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.366025ms\"\nI0615 03:26:57.027746 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:57.074196 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=32\nI0615 03:26:57.080572 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.855034ms\"\nI0615 03:26:57.600682 10 service.go:322] \"Service updated ports\" service=\"services-6962/tolerate-unready\" portCount=0\nI0615 03:26:57.600725 10 service.go:462] \"Removing service port\" portName=\"services-6962/tolerate-unready:http\"\nI0615 03:26:57.600756 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:57.660000 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30\nI0615 03:26:57.665058 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"64.325017ms\"\nI0615 03:26:58.665824 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:58.689483 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 
numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30
I0615 03:26:58.692474 10 proxier.go:820] "SyncProxyRules complete" elapsed="26.698944ms"
I0615 03:27:02.763476 10 service.go:322] "Service updated ports" service="services-3265/nodeport-collision-1" portCount=1
I0615 03:27:02.763526 10 service.go:437] "Adding new service port" portName="services-3265/nodeport-collision-1" servicePort="172.20.2.226:80/TCP"
I0615 03:27:02.763554 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:02.792490 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30
I0615 03:27:02.796616 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.094532ms"
I0615 03:27:02.796661 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:02.833631 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30
I0615 03:27:02.837576 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.926941ms"
I0615 03:27:03.062867 10 service.go:322] "Service updated ports" service="services-3265/nodeport-collision-1" portCount=0
I0615 03:27:03.240322 10 service.go:322] "Service updated ports" service="services-3265/nodeport-collision-2" portCount=1
I0615 03:27:03.837921 10 service.go:462] "Removing service port" portName="services-3265/nodeport-collision-1"
I0615 03:27:03.837982 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:03.863252 10 proxier.go:1464] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30
I0615 03:27:03.867237 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.328106ms"
I0615 03:27:06.870935 10 service.go:322] "Service updated ports" service="kubectl-3933/agnhost-primary" portCount=1
I0615 03:27:06.870987 10 service.go:437] "Adding new service port" portName="kubectl-3933/agnhost-primary" servicePort="172.20.22.55:6379/TCP"
I0615 03:27:06.871017 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:06.899873 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30
I0615 03:27:06.904587 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.607809ms"
I0615 03:27:06.904648 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:06.934696 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30
I0615 03:27:06.939784 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.159315ms"
I0615 03:27:07.371058 10 service.go:322] "Service updated ports" service="dns-4197/test-service-2" portCount=1
I0615 03:27:07.940074 10 service.go:437] "Adding new service port" portName="dns-4197/test-service-2:http" servicePort="172.20.30.40:80/TCP"
I0615 03:27:07.940121 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:07.968393 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30
I0615 03:27:07.972210 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.188465ms"
I0615 03:27:12.220811 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:12.247441 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34
I0615 03:27:12.251354 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.518027ms"
I0615 03:27:18.424541 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:18.477452 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38
I0615 03:27:18.487552 10 proxier.go:820] "SyncProxyRules complete" elapsed="63.053504ms"
I0615 03:27:23.996290 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:24.033696 10 service.go:322] "Service updated ports" service="kubectl-3933/agnhost-primary" portCount=0
I0615 03:27:24.037299 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=36
I0615 03:27:24.042112 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.855092ms"
I0615 03:27:24.042155 10 service.go:462] "Removing service port" portName="kubectl-3933/agnhost-primary"
I0615 03:27:24.042194 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:24.110039 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34
I0615 03:27:24.118445 10 proxier.go:820] "SyncProxyRules complete" elapsed="76.287509ms"
I0615 03:27:32.967600 10 service.go:322] "Service updated ports" service="services-8063/nodeport-update-service" portCount=1
I0615 03:27:32.967650 10 service.go:437] "Adding new service port" portName="services-8063/nodeport-update-service" servicePort="172.20.25.229:80/TCP"
I0615 03:27:32.967672 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:33.035554 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34
I0615 03:27:33.048713 10 proxier.go:820] "SyncProxyRules complete" elapsed="81.066259ms"
I0615 03:27:33.049225 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:33.134949 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34
I0615 03:27:33.141043 10 proxier.go:820] "SyncProxyRules complete" elapsed="92.279887ms"
I0615 03:27:33.263920 10 service.go:322] "Service updated ports" service="services-8063/nodeport-update-service" portCount=1
I0615 03:27:34.141260 10 service.go:437] "Adding new service port" portName="services-8063/nodeport-update-service:tcp-port" servicePort="172.20.25.229:80/TCP"
I0615 03:27:34.141291 10 service.go:462] "Removing service port" portName="services-8063/nodeport-update-service"
I0615 03:27:34.141321 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:34.164670 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34
I0615 03:27:34.168536 10 proxier.go:820] "SyncProxyRules complete" elapsed="27.311624ms"
I0615 03:27:35.168865 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:35.192531 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=41
I0615 03:27:35.196789 10 proxier.go:820] "SyncProxyRules complete" elapsed="27.982776ms"
I0615 03:27:37.114022 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:37.152498 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44
I0615 03:27:37.156472 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.513329ms"
I0615 03:27:38.964773 10 service.go:322] "Service updated ports" service="pods-1526/fooservice" portCount=1
I0615 03:27:38.964821 10 service.go:437] "Adding new service port" portName="pods-1526/fooservice" servicePort="172.20.20.159:8765/TCP"
I0615 03:27:38.964852 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:38.995320 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=44
I0615 03:27:38.999510 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.694037ms"
I0615 03:27:38.999780 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:39.026662 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=48
I0615 03:27:39.030333 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.608346ms"
I0615 03:27:47.067648 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:47.096170 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=46
I0615 03:27:47.100680 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.061211ms"
I0615 03:27:47.100764 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:47.127582 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=44
I0615 03:27:47.131139 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.422891ms"
I0615 03:27:47.205452 10 service.go:322] "Service updated ports" service="dns-4197/test-service-2" portCount=0
I0615 03:27:48.131742 10 service.go:462] "Removing service port" portName="dns-4197/test-service-2:http"
I0615 03:27:48.131813 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:48.156609 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44
I0615 03:27:48.160341 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.615142ms"
I0615 03:27:49.555865 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:49.588566 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=42
I0615 03:27:49.592918 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.093339ms"
I0615 03:27:49.660348 10 service.go:322] "Service updated ports" service="pods-1526/fooservice" portCount=0
I0615 03:27:50.593696 10 service.go:462] "Removing service port" portName="pods-1526/fooservice"
I0615 03:27:50.593780 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:50.625062 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=40
I0615 03:27:50.628408 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.735604ms"
I0615 03:27:52.035675 10 service.go:322] "Service updated ports" service="webhook-9884/e2e-test-webhook" portCount=1
I0615 03:27:52.035724 10 service.go:437] "Adding new service port" portName="webhook-9884/e2e-test-webhook" servicePort="172.20.11.212:8443/TCP"
I0615 03:27:52.035755 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:52.078361 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=40
I0615 03:27:52.085372 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.64787ms"
I0615 03:27:52.085522 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:52.121398 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44
I0615 03:27:52.127048 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.588108ms"
I0615 03:27:54.359893 10 service.go:322] "Service updated ports" service="webhook-9884/e2e-test-webhook" portCount=0
I0615 03:27:54.359938 10 service.go:462] "Removing service port" portName="webhook-9884/e2e-test-webhook"
I0615 03:27:54.359970 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:54.384267 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=42
I0615 03:27:54.388194 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.258392ms"
I0615 03:27:54.388282 10 proxier.go:853] "Syncing iptables rules"
I0615 03:27:54.412413 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=40
I0615 03:27:54.416253 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.014606ms"
I0615 03:28:01.646793 10 service.go:322] "Service updated ports" service="services-1772/affinity-nodeport" portCount=1
I0615 03:28:01.646844 10 service.go:437] "Adding new service port" portName="services-1772/affinity-nodeport" servicePort="172.20.25.152:80/TCP"
I0615 03:28:01.646870 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:01.698407 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=40
I0615 03:28:01.710864 10 proxier.go:820] "SyncProxyRules complete" elapsed="64.024151ms"
I0615 03:28:01.710917 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:01.800202 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=40
I0615 03:28:01.809596 10 proxier.go:820] "SyncProxyRules complete" elapsed="98.690084ms"
I0615 03:28:04.005500 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:04.036216 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=48
I0615 03:28:04.047031 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.575671ms"
I0615 03:28:05.001884 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:05.026588 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=52
I0615 03:28:05.030370 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.549827ms"
I0615 03:28:14.019721 10 service.go:322] "Service updated ports" service="services-8063/nodeport-update-service" portCount=2
I0615 03:28:14.019765 10 service.go:439] "Updating existing service port" portName="services-8063/nodeport-update-service:tcp-port" servicePort="172.20.25.229:80/TCP"
I0615 03:28:14.019775 10 service.go:437] "Adding new service port" portName="services-8063/nodeport-update-service:udp-port" servicePort="172.20.25.229:80/UDP"
I0615 03:28:14.019799 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:14.050167 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=52
I0615 03:28:14.056600 10 proxier.go:820] "SyncProxyRules complete" elapsed="36.836886ms"
I0615 03:28:14.057031 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="services-8063/nodeport-update-service:udp-port" clusterIP="172.20.25.229"
I0615 03:28:14.057316 10 proxier.go:847] "Stale service" protocol="udp" servicePortName="services-8063/nodeport-update-service:udp-port" nodePort=31598
I0615 03:28:14.057520 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:14.090514 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=27 numNATRules=62
I0615 03:28:14.124264 10 proxier.go:820] "SyncProxyRules complete" elapsed="67.382241ms"
I0615 03:28:16.562583 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:16.595260 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=66
I0615 03:28:16.599289 10 proxier.go:820] "SyncProxyRules complete" elapsed="36.771491ms"
I0615 03:28:28.943669 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:28.996920 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=63
I0615 03:28:29.004493 10 proxier.go:820] "SyncProxyRules complete" elapsed="60.872083ms"
I0615 03:28:29.946900 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:29.974220 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=27 numNATRules=54
I0615 03:28:29.978028 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.185325ms"
I0615 03:28:30.565751 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:30.607545 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:30.611671 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.978649ms"
I0615 03:28:31.023660 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:31.103862 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:31.107572 10 proxier.go:820] "SyncProxyRules complete" elapsed="83.970752ms"
I0615 03:28:32.108666 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:32.137104 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:32.151981 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.377248ms"
I0615 03:28:32.782272 10 service.go:322] "Service updated ports" service="services-1772/affinity-nodeport" portCount=0
I0615 03:28:33.152135 10 service.go:462] "Removing service port" portName="services-1772/affinity-nodeport"
I0615 03:28:33.152362 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:33.180644 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=50
I0615 03:28:33.185221 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.104289ms"
I0615 03:28:38.612274 10 service.go:322] "Service updated ports" service="conntrack-5332/svc-udp" portCount=1
I0615 03:28:38.612532 10 service.go:437] "Adding new service port" portName="conntrack-5332/svc-udp:udp" servicePort="172.20.26.215:80/UDP"
I0615 03:28:38.612652 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:38.637681 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:38.642424 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.903016ms"
I0615 03:28:38.642475 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:38.671228 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:38.675328 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.869686ms"
I0615 03:28:45.758187 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-5332/svc-udp:udp" clusterIP="172.20.26.215"
I0615 03:28:45.758256 10 proxier.go:847] "Stale service" protocol="udp" servicePortName="conntrack-5332/svc-udp:udp" nodePort=31411
I0615 03:28:45.758264 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:45.784155 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=57
I0615 03:28:45.799561 10 proxier.go:820] "SyncProxyRules complete" elapsed="41.470687ms"
I0615 03:28:48.667585 10 service.go:322] "Service updated ports" service="endpointslice-4880/example-int-port" portCount=1
I0615 03:28:48.667637 10 service.go:437] "Adding new service port" portName="endpointslice-4880/example-int-port:example" servicePort="172.20.13.140:80/TCP"
I0615 03:28:48.667670 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:48.701273 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=57
I0615 03:28:48.706206 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.575756ms"
I0615 03:28:48.706273 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:48.743704 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=57
I0615 03:28:48.748366 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.120833ms"
I0615 03:28:48.813776 10 service.go:322] "Service updated ports" service="endpointslice-4880/example-named-port" portCount=1
I0615 03:28:48.970063 10 service.go:322] "Service updated ports" service="endpointslice-4880/example-no-match" portCount=1
I0615 03:28:49.748730 10 service.go:437] "Adding new service port" portName="endpointslice-4880/example-named-port:http" servicePort="172.20.17.236:80/TCP"
I0615 03:28:49.748771 10 service.go:437] "Adding new service port" portName="endpointslice-4880/example-no-match:example-no-match" servicePort="172.20.4.63:80/TCP"
I0615 03:28:49.748814 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:49.774980 10 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=26 numNATRules=57
I0615 03:28:49.781675 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.938713ms"
I0615 03:28:51.998304 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:52.026108 10 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=28 numNATRules=61
I0615 03:28:52.029946 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.686814ms"
I0615 03:28:52.404956 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:52.517453 10 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=65
I0615 03:28:52.531110 10 proxier.go:820] "SyncProxyRules complete" elapsed="126.214445ms"
I0615 03:28:53.531631 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:53.569500 10 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=68
I0615 03:28:53.574666 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.123423ms"
I0615 03:28:58.140215 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:58.192443 10 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=32 numNATRules=71
I0615 03:28:58.199533 10 proxier.go:820] "SyncProxyRules complete" elapsed="59.387952ms"
I0615 03:28:59.535268 10 proxier.go:853] "Syncing iptables rules"
I0615 03:28:59.567956 10 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=32 numNATRules=69
I0615 03:28:59.587150 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.973292ms"
I0615 03:29:00.539672 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:00.573348 10 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=68
I0615 03:29:00.580433 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.824993ms"
I0615 03:29:01.765376 10 service.go:322] "Service updated ports" service="services-8063/nodeport-update-service" portCount=0
I0615 03:29:01.765796 10 service.go:462] "Removing service port" portName="services-8063/nodeport-update-service:tcp-port"
I0615 03:29:01.765831 10 service.go:462] "Removing service port" portName="services-8063/nodeport-update-service:udp-port"
I0615 03:29:01.765874 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:01.818243 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=56
I0615 03:29:01.832258 10 proxier.go:820] "SyncProxyRules complete" elapsed="66.47261ms"
I0615 03:29:01.832473 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:01.865351 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=48
I0615 03:29:01.871337 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.967291ms"
I0615 03:29:10.433113 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:10.457415 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=46
I0615 03:29:10.461249 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.202014ms"
I0615 03:29:10.580721 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:10.609966 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=43
I0615 03:29:10.613951 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.294777ms"
I0615 03:29:11.435407 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:11.472365 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=45
I0615 03:29:11.482905 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.531518ms"
I0615 03:29:12.349369 10 service.go:322] "Service updated ports" service="conntrack-5332/svc-udp" portCount=0
I0615 03:29:12.483571 10 service.go:462] "Removing service port" portName="conntrack-5332/svc-udp:udp"
I0615 03:29:12.483681 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:12.519693 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=44
I0615 03:29:12.528429 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.893666ms"
I0615 03:29:19.615671 10 service.go:322] "Service updated ports" service="conntrack-6419/svc-udp" portCount=1
I0615 03:29:19.615729 10 service.go:437] "Adding new service port" portName="conntrack-6419/svc-udp:udp" servicePort="172.20.22.210:80/UDP"
I0615 03:29:19.615764 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:19.644980 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41
I0615 03:29:19.648900 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.179179ms"
I0615 03:29:19.648971 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:19.676257 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41
I0615 03:29:19.680744 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.804902ms"
I0615 03:29:22.791120 10 service.go:322] "Service updated ports" service="webhook-3549/e2e-test-webhook" portCount=1
I0615 03:29:22.791176 10 service.go:437] "Adding new service port" portName="webhook-3549/e2e-test-webhook" servicePort="172.20.4.113:8443/TCP"
I0615 03:29:22.794493 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:22.822173 10 proxier.go:1464] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=20 numNATRules=41
I0615 03:29:22.826934 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.762795ms"
I0615 03:29:22.827001 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:22.851696 10 proxier.go:1464] "Reloading service iptables data" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=45
I0615 03:29:22.855218 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.253336ms"
I0615 03:29:26.410896 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:26.437802 10 proxier.go:1464] "Reloading service iptables data" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=22 numNATRules=43
I0615 03:29:26.441229 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.387664ms"
I0615 03:29:26.441330 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:26.448146 10 service.go:322] "Service updated ports" service="endpointslice-4880/example-int-port" portCount=0
I0615 03:29:26.456208 10 service.go:322] "Service updated ports" service="endpointslice-4880/example-named-port" portCount=0
I0615 03:29:26.464524 10 service.go:322] "Service updated ports" service="endpointslice-4880/example-no-match" portCount=0
I0615 03:29:26.470405 10 proxier.go:1464] "Reloading service iptables data" numServices=9 numEndpoints=8 numFilterChains=4 numFilterRules=7 numNATChains=20 numNATRules=37
I0615 03:29:26.475031 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.760468ms"
I0615 03:29:27.475573 10 service.go:462] "Removing service port" portName="endpointslice-4880/example-int-port:example"
I0615 03:29:27.475661 10 service.go:462] "Removing service port" portName="endpointslice-4880/example-named-port:http"
I0615 03:29:27.475683 10 service.go:462] "Removing service port" portName="endpointslice-4880/example-no-match:example-no-match"
I0615 03:29:27.475795 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-6419/svc-udp:udp" clusterIP="172.20.22.210"
I0615 03:29:27.475809 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:27.520113 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38
I0615 03:29:27.530732 10 proxier.go:820] "SyncProxyRules complete" elapsed="55.179512ms"
I0615 03:29:32.219363 10 service.go:322] "Service updated ports" service="webhook-490/e2e-test-webhook" portCount=1
I0615 03:29:32.219413 10 service.go:437] "Adding new service port" portName="webhook-490/e2e-test-webhook" servicePort="172.20.21.225:8443/TCP"
I0615 03:29:32.219447 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:32.246358 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=38
I0615 03:29:32.250313 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.903699ms"
I0615 03:29:32.250378 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:32.331333 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=42
I0615 03:29:32.337942 10 proxier.go:820] "SyncProxyRules complete" elapsed="87.590773ms"
I0615 03:29:36.579679 10 service.go:322] "Service updated ports" service="webhook-3549/e2e-test-webhook" portCount=0
I0615 03:29:36.579714 10 service.go:462] "Removing service port" portName="webhook-3549/e2e-test-webhook"
I0615 03:29:36.579743 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:36.619722 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=40
I0615 03:29:36.631571 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.848614ms"
I0615 03:29:36.631681 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:36.666826 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38
I0615 03:29:36.669930 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.318837ms"
I0615 03:29:37.193622 10 service.go:322] "Service updated ports" service="webhook-490/e2e-test-webhook" portCount=0
I0615 03:29:37.670846 10 service.go:462] "Removing service port" portName="webhook-490/e2e-test-webhook"
I0615 03:29:37.670932 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:37.705896 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=36
I0615 03:29:37.711710 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.875281ms"
I0615 03:29:38.926761 10 service.go:322] "Service updated ports" service="services-2660/hairpin-test" portCount=1
I0615 03:29:38.926814 10 service.go:437] "Adding new service port" portName="services-2660/hairpin-test" servicePort="172.20.9.144:8080/TCP"
I0615 03:29:38.926850 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:38.957137 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34
I0615 03:29:38.960521 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.712813ms"
I0615 03:29:39.961224 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:40.031914 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34
I0615 03:29:40.048169 10 proxier.go:820] "SyncProxyRules complete" elapsed="87.003841ms"
I0615 03:29:41.086103 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:41.115040 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38
I0615 03:29:41.119103 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.049621ms"
I0615 03:29:42.119507 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:42.153197 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=41
I0615 03:29:42.157138 10 proxier.go:820] "SyncProxyRules complete" elapsed="37.714423ms"
I0615 03:29:43.157524 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:43.192739 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=39
I0615 03:29:43.203581 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.122903ms"
I0615 03:29:44.203842 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:44.228078 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38
I0615 03:29:44.231212 10 proxier.go:820] "SyncProxyRules complete" elapsed="27.461932ms"
I0615 03:29:46.827219 10 service.go:322] "Service updated ports" service="services-9220/affinity-nodeport-transition" portCount=1
I0615 03:29:46.827296 10 service.go:437] "Adding new service port" portName="services-9220/affinity-nodeport-transition" servicePort="172.20.27.35:80/TCP"
I0615 03:29:46.827333 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:46.853992 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=38
I0615 03:29:46.858218 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.930287ms"
I0615 03:29:46.858312 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:46.887519 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=38
I0615 03:29:46.890921 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.637496ms"
I0615 03:29:48.054090 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:48.082352 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=46
I0615 03:29:48.086235 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.206914ms"
I0615 03:29:48.860958 10 service.go:322] "Service updated ports" service="webhook-4925/e2e-test-webhook" portCount=1
I0615 03:29:48.861010 10 service.go:437] "Adding new service port" portName="webhook-4925/e2e-test-webhook" servicePort="172.20.28.94:8443/TCP"
I0615 03:29:48.861051 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:48.886967 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=46
I0615 03:29:48.890393 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.3928ms"
I0615 03:29:49.891608 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:49.918499 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=50
I0615 03:29:49.922112 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.558152ms"
I0615 03:29:50.922708 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:50.956510 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=58
I0615 03:29:50.961431 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.032238ms"
I0615 03:29:51.238738 10 service.go:322] "Service updated ports" service="services-2660/hairpin-test" portCount=0
I0615 03:29:51.961709 10 service.go:462] "Removing service port" portName="services-2660/hairpin-test"
I0615 03:29:51.961796 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:51.992161 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=56
I0615 03:29:51.995705 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.025792ms"
I0615 03:29:52.063614 10 service.go:322] "Service updated ports" service="webhook-4925/e2e-test-webhook" portCount=0
I0615 03:29:52.996125 10 service.go:462] "Removing service port" portName="webhook-4925/e2e-test-webhook"
I0615 03:29:52.996215 10 proxier.go:853] "Syncing iptables rules"
I0615 03:29:53.027424 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=52
I0615 03:29:53.034931 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.823465ms"
I0615 03:30:00.437902 10 proxier.go:853] "Syncing iptables rules"
I0615 03:30:00.468298 10 service.go:322] "Service updated ports" service="conntrack-6419/svc-udp" portCount=0
I0615 03:30:00.522375 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=48
I0615 03:30:00.546375 10 proxier.go:820] "SyncProxyRules complete" elapsed="108.527992ms"
I0615 03:30:00.546495 10 service.go:462] "Removing service port" portName="conntrack-6419/svc-udp:udp"
I0615 03:30:00.546571 10 proxier.go:853] "Syncing iptables rules"
I0615 03:30:00.628953 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=46
I0615 03:30:00.663910 10 proxier.go:820] "SyncProxyRules complete" elapsed="117.49101ms"
I0615 03:30:03.321848 10 service.go:322] "Service updated ports" service="services-9220/affinity-nodeport-transition" portCount=1
I0615 03:30:03.321897 10 service.go:439] "Updating existing service port" portName="services-9220/affinity-nodeport-transition" servicePort="172.20.27.35:80/TCP"
I0615 03:30:03.321946 10 proxier.go:853] "Syncing iptables rules"
I0615 03:30:03.347488 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=43
I0615 
03:30:03.352273 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.33046ms\"\nI0615 03:30:05.199763 10 service.go:322] \"Service updated ports\" service=\"services-9220/affinity-nodeport-transition\" portCount=1\nI0615 03:30:05.199951 10 service.go:439] \"Updating existing service port\" portName=\"services-9220/affinity-nodeport-transition\" servicePort=\"172.20.27.35:80/TCP\"\nI0615 03:30:05.200015 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:05.236949 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=46\nI0615 03:30:05.240618 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.81014ms\"\nI0615 03:30:07.424089 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:07.460302 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=43\nI0615 03:30:07.464769 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.762984ms\"\nI0615 03:30:08.432022 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:08.479918 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=34\nI0615 03:30:08.485578 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.802535ms\"\nI0615 03:30:08.774466 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:08.850726 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:30:08.854697 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"80.297842ms\"\nI0615 03:30:09.854934 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:09.879641 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 
03:30:09.882776 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"27.931965ms\"\nI0615 03:30:11.391087 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:11.420826 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:30:11.426886 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.867367ms\"\nI0615 03:30:11.598246 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:11.633060 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:30:11.636633 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.45307ms\"\nI0615 03:30:13.226762 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:13.305885 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:30:13.313364 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"86.642609ms\"\nI0615 03:30:13.634802 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:13.662995 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:30:13.666314 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.555986ms\"\nI0615 03:30:13.831801 10 service.go:322] \"Service updated ports\" service=\"services-9220/affinity-nodeport-transition\" portCount=0\nI0615 03:30:13.854160 10 service.go:322] \"Service updated ports\" service=\"services-8364/nodeport-reuse\" portCount=1\nI0615 03:30:14.666533 10 service.go:462] \"Removing service port\" portName=\"services-9220/affinity-nodeport-transition\"\nI0615 03:30:14.666593 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:14.692208 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 
numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30\nI0615 03:30:14.696530 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.019526ms\"\nI0615 03:30:20.116889 10 service.go:322] \"Service updated ports\" service=\"services-8364/nodeport-reuse\" portCount=1\nI0615 03:30:20.116948 10 service.go:437] \"Adding new service port\" portName=\"services-8364/nodeport-reuse\" servicePort=\"172.20.23.172:80/TCP\"\nI0615 03:30:20.116986 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:20.163804 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:30:20.170335 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"53.39499ms\"\nI0615 03:30:20.170388 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:20.211944 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:30:20.218081 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.709799ms\"\nI0615 03:30:20.263592 10 service.go:322] \"Service updated ports\" service=\"services-8364/nodeport-reuse\" portCount=0\nI0615 03:30:21.218312 10 service.go:462] \"Removing service port\" portName=\"services-8364/nodeport-reuse\"\nI0615 03:30:21.218379 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:21.246626 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30\nI0615 03:30:21.251377 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.078349ms\"\nI0615 03:30:27.275721 10 service.go:322] \"Service updated ports\" service=\"webhook-2247/e2e-test-webhook\" portCount=1\nI0615 03:30:27.276013 10 service.go:437] \"Adding new service port\" portName=\"webhook-2247/e2e-test-webhook\" servicePort=\"172.20.31.12:8443/TCP\"\nI0615 03:30:27.276168 10 proxier.go:853] \"Syncing 
iptables rules\"\nI0615 03:30:27.387066 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:30:27.398652 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"122.720116ms\"\nI0615 03:30:27.398720 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:27.453473 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34\nI0615 03:30:27.470219 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"71.526746ms\"\nI0615 03:30:29.599568 10 service.go:322] \"Service updated ports\" service=\"webhook-2247/e2e-test-webhook\" portCount=0\nI0615 03:30:29.599617 10 service.go:462] \"Removing service port\" portName=\"webhook-2247/e2e-test-webhook\"\nI0615 03:30:29.599652 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:29.627438 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=32\nI0615 03:30:29.632568 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.948919ms\"\nI0615 03:30:29.632777 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:29.659827 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30\nI0615 03:30:29.664208 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.599269ms\"\nI0615 03:30:31.235948 10 service.go:322] \"Service updated ports\" service=\"services-9637/clusterip-service\" portCount=1\nI0615 03:30:31.236002 10 service.go:437] \"Adding new service port\" portName=\"services-9637/clusterip-service\" servicePort=\"172.20.15.193:80/TCP\"\nI0615 03:30:31.236063 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:31.263101 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 
numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:30:31.273375 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.374423ms\"\nI0615 03:30:31.379580 10 service.go:322] \"Service updated ports\" service=\"services-9637/externalsvc\" portCount=1\nI0615 03:30:32.275638 10 service.go:437] \"Adding new service port\" portName=\"services-9637/externalsvc\" servicePort=\"172.20.27.34:80/TCP\"\nI0615 03:30:32.275710 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:32.353335 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:30:32.360552 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"84.945492ms\"\nI0615 03:30:33.601748 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:33.632297 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:30:33.637514 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.953889ms\"\nI0615 03:30:35.919898 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:35.945863 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=18 numNATRules=37\nI0615 03:30:35.949853 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.004893ms\"\nI0615 03:30:38.118853 10 service.go:322] \"Service updated ports\" service=\"services-9637/clusterip-service\" portCount=0\nI0615 03:30:38.118897 10 service.go:462] \"Removing service port\" portName=\"services-9637/clusterip-service\"\nI0615 03:30:38.118935 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:38.148277 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=18 numNATRules=37\nI0615 03:30:38.151349 10 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"32.452173ms\"\nI0615 03:30:38.151404 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:38.175179 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=18 numNATRules=37\nI0615 03:30:38.179724 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.34517ms\"\nI0615 03:30:42.529414 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:42.557667 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=18 numNATRules=35\nI0615 03:30:42.561583 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.216947ms\"\nI0615 03:30:43.534793 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:43.561010 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=32\nI0615 03:30:43.564727 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.988301ms\"\nI0615 03:30:44.020910 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:44.066141 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:30:44.072110 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.266336ms\"\nI0615 03:30:45.072318 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:45.096378 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:30:45.099866 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"27.638623ms\"\nI0615 03:30:45.717542 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:45.741641 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:30:45.744877 10 proxier.go:820] 
\"SyncProxyRules complete\" elapsed=\"27.372723ms\"\nI0615 03:30:45.863252 10 service.go:322] \"Service updated ports\" service=\"services-9637/externalsvc\" portCount=0\nI0615 03:30:46.745098 10 service.go:462] \"Removing service port\" portName=\"services-9637/externalsvc\"\nI0615 03:30:46.745166 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:30:46.775213 10 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30\nI0615 03:30:46.778653 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.588975ms\"\nI0615 03:31:05.170026 10 service.go:322] \"Service updated ports\" service=\"services-219/affinity-clusterip-transition\" portCount=1\nI0615 03:31:05.170122 10 service.go:437] \"Adding new service port\" portName=\"services-219/affinity-clusterip-transition\" servicePort=\"172.20.23.39:80/TCP\"\nI0615 03:31:05.170158 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:05.196292 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:31:05.199512 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.403446ms\"\nI0615 03:31:05.199572 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:05.227334 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:31:05.230451 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.864366ms\"\nI0615 03:31:10.735325 10 service.go:322] \"Service updated ports\" service=\"dns-4940/dns-test-service-3\" portCount=1\nI0615 03:31:10.735377 10 service.go:437] \"Adding new service port\" portName=\"dns-4940/dns-test-service-3:http\" servicePort=\"172.20.28.114:80/TCP\"\nI0615 03:31:10.735412 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:10.768830 10 proxier.go:1464] \"Reloading service iptables 
data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:31:10.774571 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.199553ms\"\nI0615 03:31:11.519709 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:11.550179 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=35\nI0615 03:31:11.554717 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.047629ms\"\nI0615 03:31:11.635553 10 service.go:322] \"Service updated ports\" service=\"conntrack-3538/boom-server\" portCount=1\nI0615 03:31:12.343742 10 service.go:437] \"Adding new service port\" portName=\"conntrack-3538/boom-server\" servicePort=\"172.20.30.195:9000/TCP\"\nI0615 03:31:12.343891 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:12.413268 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=47\nI0615 03:31:12.425181 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"81.68128ms\"\nI0615 03:31:14.069584 10 service.go:322] \"Service updated ports\" service=\"dns-4940/dns-test-service-3\" portCount=0\nI0615 03:31:14.069627 10 service.go:462] \"Removing service port\" portName=\"dns-4940/dns-test-service-3:http\"\nI0615 03:31:14.069667 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:14.098605 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=47\nI0615 03:31:14.102324 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.697634ms\"\nI0615 03:31:35.392670 10 service.go:322] \"Service updated ports\" service=\"services-219/affinity-clusterip-transition\" portCount=1\nI0615 03:31:35.392720 10 service.go:439] \"Updating existing service port\" portName=\"services-219/affinity-clusterip-transition\" 
servicePort=\"172.20.23.39:80/TCP\"\nI0615 03:31:35.392761 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:35.443770 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44\nI0615 03:31:35.447787 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.071517ms\"\nI0615 03:31:37.357756 10 service.go:322] \"Service updated ports\" service=\"services-219/affinity-clusterip-transition\" portCount=1\nI0615 03:31:37.357806 10 service.go:439] \"Updating existing service port\" portName=\"services-219/affinity-clusterip-transition\" servicePort=\"172.20.23.39:80/TCP\"\nI0615 03:31:37.357833 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:37.402882 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=47\nI0615 03:31:37.408941 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.128459ms\"\nI0615 03:31:39.741813 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:39.773148 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44\nI0615 03:31:39.776960 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"35.219896ms\"\nI0615 03:31:40.746980 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:40.808925 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=37\nI0615 03:31:40.818097 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"71.199474ms\"\nI0615 03:31:41.623992 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:41.650056 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:31:41.653840 10 proxier.go:820] \"SyncProxyRules 
complete\" elapsed=\"29.931436ms\"\nI0615 03:31:41.813705 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:41.837749 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:31:41.842162 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.518679ms\"\nI0615 03:31:43.974821 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:44.009132 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:31:44.012763 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.041565ms\"\nI0615 03:31:44.441641 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:44.478051 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:31:44.483579 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.00197ms\"\nI0615 03:31:44.538836 10 service.go:322] \"Service updated ports\" service=\"webhook-4280/e2e-test-webhook\" portCount=1\nI0615 03:31:45.484055 10 service.go:437] \"Adding new service port\" portName=\"webhook-4280/e2e-test-webhook\" servicePort=\"172.20.7.37:8443/TCP\"\nI0615 03:31:45.484162 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:45.518368 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=38\nI0615 03:31:45.524412 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.385775ms\"\nI0615 03:31:46.302542 10 service.go:322] \"Service updated ports\" service=\"services-6649/up-down-1\" portCount=1\nI0615 03:31:46.302591 10 service.go:437] \"Adding new service port\" portName=\"services-6649/up-down-1\" servicePort=\"172.20.28.65:80/TCP\"\nI0615 03:31:46.302629 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 
03:31:46.349870 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=38\nI0615 03:31:46.354400 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.809918ms\"\nI0615 03:31:46.761055 10 service.go:322] \"Service updated ports\" service=\"services-219/affinity-clusterip-transition\" portCount=0\nI0615 03:31:47.355583 10 service.go:462] \"Removing service port\" portName=\"services-219/affinity-clusterip-transition\"\nI0615 03:31:47.355658 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:47.441435 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=38\nI0615 03:31:47.448244 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"92.692143ms\"\nI0615 03:31:48.037010 10 service.go:322] \"Service updated ports\" service=\"webhook-4280/e2e-test-webhook\" portCount=0\nI0615 03:31:48.037056 10 service.go:462] \"Removing service port\" portName=\"webhook-4280/e2e-test-webhook\"\nI0615 03:31:48.037098 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:48.095927 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=36\nI0615 03:31:48.100316 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"63.249314ms\"\nI0615 03:31:49.100649 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:49.142324 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:31:49.145608 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.038083ms\"\nI0615 03:31:52.498721 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:31:52.526159 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 
numNATRules=38\nI0615 03:31:52.530150 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.453949ms\"\nI0615 03:32:01.615780 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:32:01.642178 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=41\nI0615 03:32:01.647323 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.581442ms\"\nI0615 03:32:16.754531 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:32:16.792224 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44\nI0615 03:32:16.797082 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"42.619228ms\"\nI0615 03:32:19.914346 10 service.go:322] \"Service updated ports\" service=\"services-6649/up-down-2\" portCount=1\nI0615 03:32:19.914394 10 service.go:437] \"Adding new service port\" portName=\"services-6649/up-down-2\" servicePort=\"172.20.29.249:80/TCP\"\nI0615 03:32:19.914464 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:32:19.940328 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=44\nI0615 03:32:19.944510 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.119942ms\"\nI0615 03:32:19.944741 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:32:19.971940 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=44\nI0615 03:32:19.976081 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.535384ms\"\nI0615 03:32:21.561234 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:32:21.590602 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=48\nI0615 03:32:21.595062 10 
proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.8946ms\"\nI0615 03:32:22.156040 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:32:22.191243 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=49\nI0615 03:32:22.196342 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.388155ms\"\nI0615 03:32:22.202174 10 service.go:322] \"Service updated ports\" service=\"conntrack-3538/boom-server\" portCount=0\nI0615 03:32:23.197226 10 service.go:462] \"Removing service port\" portName=\"conntrack-3538/boom-server\"\nI0615 03:32:23.197278 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:32:23.222049 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=47\nI0615 03:32:23.225815 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.605995ms\"\nI0615 03:32:25.110456 10 service.go:322] \"Service updated ports\" service=\"sctp-1386/sctp-clusterip\" portCount=1\nI0615 03:32:25.110504 10 service.go:437] \"Adding new service port\" portName=\"sctp-1386/sctp-clusterip\" servicePort=\"172.20.14.31:5060/SCTP\"\nI0615 03:32:25.110542 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:32:25.142602 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=47\nI0615 03:32:25.153767 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.265746ms\"\nI0615 03:32:25.153825 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:32:25.183571 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=47\nI0615 03:32:25.187997 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.197364ms\"\nI0615 03:32:26.814019 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 
03:32:26.843867 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=50
I0615 03:32:26.848206 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.238512ms"
I0615 03:32:38.139581 10 service.go:322] "Service updated ports" service="sctp-1386/sctp-clusterip" portCount=0
I0615 03:32:38.139635 10 service.go:462] "Removing service port" portName="sctp-1386/sctp-clusterip"
I0615 03:32:38.139680 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:38.181846 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=50
I0615 03:32:38.193275 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.629667ms"
I0615 03:32:38.193349 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:38.241083 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=50
I0615 03:32:38.246311 10 proxier.go:820] "SyncProxyRules complete" elapsed="52.991239ms"
I0615 03:32:44.424967 10 service.go:322] "Service updated ports" service="endpointslice-233/example-empty-selector" portCount=1
I0615 03:32:44.425020 10 service.go:437] "Adding new service port" portName="endpointslice-233/example-empty-selector:example" servicePort="172.20.5.47:80/TCP"
I0615 03:32:44.425057 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:44.452448 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=50
I0615 03:32:44.457241 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.225502ms"
I0615 03:32:44.457306 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:44.485628 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=50
I0615 03:32:44.489927 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.64579ms"
I0615 03:32:44.860117 10 service.go:322] "Service updated ports" service="endpointslice-233/example-empty-selector" portCount=0
I0615 03:32:45.490365 10 service.go:462] "Removing service port" portName="endpointslice-233/example-empty-selector:example"
I0615 03:32:45.490469 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:45.517518 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=50
I0615 03:32:45.521804 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.453828ms"
I0615 03:32:47.265560 10 service.go:322] "Service updated ports" service="services-1926/nodeport-test" portCount=1
I0615 03:32:47.266176 10 service.go:437] "Adding new service port" portName="services-1926/nodeport-test:http" servicePort="172.20.10.87:80/TCP"
I0615 03:32:47.266231 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:47.471362 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:32:47.482018 10 proxier.go:820] "SyncProxyRules complete" elapsed="216.402836ms"
I0615 03:32:47.482071 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:47.594843 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:32:47.606112 10 proxier.go:820] "SyncProxyRules complete" elapsed="124.052038ms"
I0615 03:32:48.873863 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:48.907149 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=57
I0615 03:32:48.912589 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.774926ms"
I0615 03:32:53.130202 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:53.159807 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=55
I0615 03:32:53.165027 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.922761ms"
I0615 03:32:53.165281 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:53.193266 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=50
I0615 03:32:53.198091 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.899832ms"
I0615 03:32:54.199231 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:54.256190 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=50
I0615 03:32:54.260815 10 proxier.go:820] "SyncProxyRules complete" elapsed="61.684119ms"
I0615 03:32:55.270975 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:55.297882 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=50
I0615 03:32:55.302767 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.843706ms"
I0615 03:32:56.303648 10 proxier.go:853] "Syncing iptables rules"
I0615 03:32:56.329288 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=50
I0615 03:32:56.333157 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.594778ms"
I0615 03:33:01.977759 10 service.go:322] "Service updated ports" service="services-940/externalname-service" portCount=1
I0615 03:33:01.977877 10 service.go:437] "Adding new service port" portName="services-940/externalname-service:http" servicePort="172.20.16.86:80/TCP"
I0615 03:33:01.978017 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:02.005593 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=50
I0615 03:33:02.009111 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.296679ms"
I0615 03:33:02.009164 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:02.033627 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=50
I0615 03:33:02.040649 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.503295ms"
I0615 03:33:03.041240 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:03.050726 10 service.go:322] "Service updated ports" service="services-6649/up-down-1" portCount=0
I0615 03:33:03.070966 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=50
I0615 03:33:03.075327 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.174576ms"
I0615 03:33:03.560584 10 service.go:322] "Service updated ports" service="webhook-3056/e2e-test-webhook" portCount=1
I0615 03:33:04.076350 10 service.go:462] "Removing service port" portName="services-6649/up-down-1"
I0615 03:33:04.076393 10 service.go:437] "Adding new service port" portName="webhook-3056/e2e-test-webhook" servicePort="172.20.2.136:8443/TCP"
I0615 03:33:04.076465 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:04.135496 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=54
I0615 03:33:04.142257 10 proxier.go:820] "SyncProxyRules complete" elapsed="65.935333ms"
I0615 03:33:05.142608 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:05.182514 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=61
I0615 03:33:05.188371 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.851238ms"
I0615 03:33:08.782649 10 service.go:322] "Service updated ports" service="webhook-3056/e2e-test-webhook" portCount=0
I0615 03:33:08.782694 10 service.go:462] "Removing service port" portName="webhook-3056/e2e-test-webhook"
I0615 03:33:08.782735 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:08.809035 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=59
I0615 03:33:08.813009 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.312714ms"
I0615 03:33:08.813085 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:08.837120 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=57
I0615 03:33:08.840802 10 proxier.go:820] "SyncProxyRules complete" elapsed="27.759001ms"
I0615 03:33:21.796984 10 service.go:322] "Service updated ports" service="services-1926/nodeport-test" portCount=0
I0615 03:33:21.797027 10 service.go:462] "Removing service port" portName="services-1926/nodeport-test:http"
I0615 03:33:21.797147 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:21.825850 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=51
I0615 03:33:21.831658 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.628843ms"
I0615 03:33:21.831769 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:21.860571 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=47
I0615 03:33:21.864812 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.115826ms"
I0615 03:33:33.014036 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:33.041012 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=50
I0615 03:33:33.045062 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.092307ms"
I0615 03:33:44.898653 10 service.go:322] "Service updated ports" service="proxy-5394/e2e-proxy-test-service" portCount=1
I0615 03:33:44.898704 10 service.go:437] "Adding new service port" portName="proxy-5394/e2e-proxy-test-service" servicePort="172.20.28.244:80/TCP"
I0615 03:33:44.898743 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:44.945822 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=50
I0615 03:33:44.950587 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.88485ms"
I0615 03:33:44.950679 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:45.004976 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=54
I0615 03:33:45.011906 10 proxier.go:820] "SyncProxyRules complete" elapsed="61.278754ms"
I0615 03:33:52.278781 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:52.317958 10 service.go:322] "Service updated ports" service="proxy-5394/e2e-proxy-test-service" portCount=0
I0615 03:33:52.482742 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=52
I0615 03:33:52.500688 10 proxier.go:820] "SyncProxyRules complete" elapsed="221.96291ms"
I0615 03:33:52.500730 10 service.go:462] "Removing service port" portName="proxy-5394/e2e-proxy-test-service"
I0615 03:33:52.500773 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:52.617620 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=50
I0615 03:33:52.632344 10 proxier.go:820] "SyncProxyRules complete" elapsed="131.610781ms"
I0615 03:33:55.308326 10 service.go:322] "Service updated ports" service="services-940/externalname-service" portCount=0
I0615 03:33:55.308373 10 service.go:462] "Removing service port" portName="services-940/externalname-service:http"
I0615 03:33:55.308413 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:55.335101 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=44
I0615 03:33:55.339220 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.847383ms"
I0615 03:33:55.339302 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:55.362840 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=40
I0615 03:33:55.366574 10 proxier.go:820] "SyncProxyRules complete" elapsed="27.31272ms"
I0615 03:33:57.432662 10 service.go:322] "Service updated ports" service="webhook-2018/e2e-test-webhook" portCount=1
I0615 03:33:57.432723 10 service.go:437] "Adding new service port" portName="webhook-2018/e2e-test-webhook" servicePort="172.20.15.54:8443/TCP"
I0615 03:33:57.432764 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:57.467299 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=40
I0615 03:33:57.471907 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.190009ms"
I0615 03:33:57.472034 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:57.512660 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44
I0615 03:33:57.517683 10 proxier.go:820] "SyncProxyRules complete" elapsed="45.680845ms"
I0615 03:33:59.960844 10 service.go:322] "Service updated ports" service="services-6649/up-down-3" portCount=1
I0615 03:33:59.960894 10 service.go:437] "Adding new service port" portName="services-6649/up-down-3" servicePort="172.20.0.230:80/TCP"
I0615 03:33:59.960945 10 proxier.go:853] "Syncing iptables rules"
I0615 03:33:59.987026 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=44
I0615 03:33:59.991228 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.339956ms"
I0615 03:33:59.991445 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:00.021696 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=44
I0615 03:34:00.025886 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.474582ms"
I0615 03:34:03.357325 10 service.go:322] "Service updated ports" service="webhook-2018/e2e-test-webhook" portCount=0
I0615 03:34:03.357358 10 service.go:462] "Removing service port" portName="webhook-2018/e2e-test-webhook"
I0615 03:34:03.357397 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:03.382633 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=42
I0615 03:34:03.385922 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.561284ms"
I0615 03:34:03.386012 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:03.423583 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=40
I0615 03:34:03.430766 10 proxier.go:820] "SyncProxyRules complete" elapsed="44.79928ms"
I0615 03:34:05.702332 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:05.726895 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44
I0615 03:34:05.730069 10 proxier.go:820] "SyncProxyRules complete" elapsed="27.806507ms"
I0615 03:34:26.010724 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:26.036854 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=47
I0615 03:34:26.041834 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.181717ms"
I0615 03:34:27.872126 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:27.907306 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=50
I0615 03:34:27.911121 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.091011ms"
I0615 03:34:32.011085 10 service.go:322] "Service updated ports" service="services-2789/nodeport-range-test" portCount=1
I0615 03:34:32.011125 10 service.go:437] "Adding new service port" portName="services-2789/nodeport-range-test" servicePort="172.20.4.237:80/TCP"
I0615 03:34:32.011155 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:32.036391 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:34:32.040486 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.35969ms"
I0615 03:34:32.040569 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:32.070357 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:34:32.074732 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.207051ms"
I0615 03:34:32.121728 10 service.go:322] "Service updated ports" service="deployment-4350/test-rolling-update-with-lb" portCount=1
I0615 03:34:32.144675 10 service.go:322] "Service updated ports" service="deployment-4350/test-rolling-update-with-lb" portCount=1
I0615 03:34:32.451436 10 service.go:322] "Service updated ports" service="services-2789/nodeport-range-test" portCount=0
I0615 03:34:33.075592 10 service.go:437] "Adding new service port" portName="deployment-4350/test-rolling-update-with-lb" servicePort="172.20.31.156:80/TCP"
I0615 03:34:33.075627 10 service.go:462] "Removing service port" portName="services-2789/nodeport-range-test"
I0615 03:34:33.075744 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:33.102366 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=65
I0615 03:34:33.106984 10 service_health.go:124] "Opening healthcheck" service="deployment-4350/test-rolling-update-with-lb" port=31657
I0615 03:34:33.107063 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.511874ms"
I0615 03:34:59.426458 10 service.go:322] "Service updated ports" service="services-6649/up-down-2" portCount=0
I0615 03:34:59.426542 10 service.go:462] "Removing service port" portName="services-6649/up-down-2"
I0615 03:34:59.426596 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:59.439986 10 service.go:322] "Service updated ports" service="services-6649/up-down-3" portCount=0
I0615 03:34:59.466162 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=59
I0615 03:34:59.470380 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.883211ms"
I0615 03:34:59.470647 10 service.go:462] "Removing service port" portName="services-6649/up-down-3"
I0615 03:34:59.470909 10 proxier.go:853] "Syncing iptables rules"
I0615 03:34:59.518691 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=49
I0615 03:34:59.525867 10 proxier.go:820] "SyncProxyRules complete" elapsed="55.349879ms"
I0615 03:35:28.172102 10 proxier.go:853] "Syncing iptables rules"
I0615 03:35:28.197138 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=45
I0615 03:35:28.200636 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.60625ms"
I0615 03:35:28.200856 10 proxier.go:853] "Syncing iptables rules"
I0615 03:35:28.230296 10 proxier.go:1464] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0615 03:35:28.232450 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.590877ms"
I0615 03:36:05.990159 10 proxier.go:853] "Syncing iptables rules"
I0615 03:36:06.018810 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=45
I0615 03:36:06.023440 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.359668ms"
I0615 03:36:06.023473 10 proxier.go:853] "Syncing iptables rules"
I0615 03:36:06.046515 10 proxier.go:1464] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5
I0615 03:36:06.048648 10 proxier.go:820] "SyncProxyRules complete" elapsed="25.171861ms"
I0615 03:36:38.569178 10 service.go:322] "Service updated ports" service="crd-webhook-6062/e2e-test-crd-conversion-webhook" portCount=1
I0615 03:36:38.569483 10 service.go:437] "Adding new service port" portName="crd-webhook-6062/e2e-test-crd-conversion-webhook" servicePort="172.20.23.113:9443/TCP"
I0615 03:36:38.569555 10 proxier.go:853] "Syncing iptables rules"
I0615 03:36:38.605808 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=45
I0615 03:36:38.610020 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.559813ms"
I0615 03:36:38.610209 10 proxier.go:853] "Syncing iptables rules"
I0615 03:36:38.634777 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=49
I0615 03:36:38.638900 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.840684ms"
I0615 03:36:44.285980 10 service.go:322] "Service updated ports" service="crd-webhook-6062/e2e-test-crd-conversion-webhook" portCount=0
I0615 03:36:44.286023 10 service.go:462] "Removing service port" portName="crd-webhook-6062/e2e-test-crd-conversion-webhook"
I0615 03:36:44.286069 10 proxier.go:853] "Syncing iptables rules"
I0615 03:36:44.312116 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=47
I0615 03:36:44.315701 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.679108ms"
I0615 03:36:44.316022 10 proxier.go:853] "Syncing iptables rules"
I0615 03:36:44.364261 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=45
I0615 03:36:44.369674 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.934ms"
I0615 03:36:53.103833 10 service.go:322] "Service updated ports" service="conntrack-6160/svc-udp" portCount=1
I0615 03:36:53.103891 10 service.go:437] "Adding new service port" portName="conntrack-6160/svc-udp:udp" servicePort="172.20.21.240:80/UDP"
I0615 03:36:53.103939 10 proxier.go:853] "Syncing iptables rules"
I0615 03:36:53.128667 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=45
I0615 03:36:53.132727 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.84081ms"
I0615 03:36:53.133014 10 proxier.go:853] "Syncing iptables rules"
I0615 03:36:53.159347 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=45
I0615 03:36:53.162991 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.011423ms"
I0615 03:36:57.023258 10 proxier.go:853] "Syncing iptables rules"
I0615 03:36:57.081154 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=45
I0615 03:36:57.085416 10 proxier.go:820] "SyncProxyRules complete" elapsed="62.222798ms"
I0615 03:37:00.236125 10 service.go:322] "Service updated ports" service="sctp-607/sctp-endpoint-test" portCount=1
I0615 03:37:00.236182 10 service.go:437] "Adding new service port" portName="sctp-607/sctp-endpoint-test" servicePort="172.20.11.118:5060/SCTP"
I0615 03:37:00.236231 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:00.261776 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=45
I0615 03:37:00.267410 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.23647ms"
I0615 03:37:00.267486 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:00.292541 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=45
I0615 03:37:00.296781 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.331092ms"
I0615 03:37:05.080830 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:05.118337 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=49
I0615 03:37:05.124203 10 proxier.go:820] "SyncProxyRules complete" elapsed="43.431993ms"
I0615 03:37:06.118099 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:06.146031 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=47
I0615 03:37:06.151234 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.292836ms"
I0615 03:37:06.151457 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:06.180623 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=45
I0615 03:37:06.186034 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.622622ms"
I0615 03:37:13.090765 10 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-6160/svc-udp:udp" clusterIP="172.20.21.240"
I0615 03:37:13.090790 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:13.115374 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=49
I0615 03:37:13.122170 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.513046ms"
I0615 03:37:15.788902 10 service.go:322] "Service updated ports" service="sctp-607/sctp-endpoint-test" portCount=0
I0615 03:37:15.788948 10 service.go:462] "Removing service port" portName="sctp-607/sctp-endpoint-test"
I0615 03:37:15.789065 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:15.824118 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=49
I0615 03:37:15.828551 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.602176ms"
I0615 03:37:15.828868 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:15.863034 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=49
I0615 03:37:15.866820 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.047804ms"
I0615 03:37:21.136117 10 service.go:322] "Service updated ports" service="aggregator-1238/sample-api" portCount=1
I0615 03:37:21.136206 10 service.go:437] "Adding new service port" portName="aggregator-1238/sample-api" servicePort="172.20.22.243:7443/TCP"
I0615 03:37:21.136257 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:21.163151 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=49
I0615 03:37:21.167846 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.645835ms"
I0615 03:37:21.167923 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:21.196094 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=49
I0615 03:37:21.201066 10 proxier.go:820] "SyncProxyRules complete" elapsed="33.182226ms"
I0615 03:37:25.659920 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:25.686676 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=47
I0615 03:37:25.695044 10 proxier.go:820] "SyncProxyRules complete" elapsed="35.203879ms"
I0615 03:37:25.759914 10 service.go:322] "Service updated ports" service="conntrack-6160/svc-udp" portCount=0
I0615 03:37:25.759958 10 service.go:462] "Removing service port" portName="conntrack-6160/svc-udp:udp"
I0615 03:37:25.760008 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:25.797928 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=45
I0615 03:37:25.808108 10 proxier.go:820] "SyncProxyRules complete" elapsed="48.139255ms"
I0615 03:37:26.808331 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:26.834260 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=49
I0615 03:37:26.837602 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.344466ms"
I0615 03:37:30.684102 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:30.745590 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=52
I0615 03:37:30.754955 10 proxier.go:820] "SyncProxyRules complete" elapsed="70.914443ms"
I0615 03:37:30.755074 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:30.819713 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=50
I0615 03:37:30.830811 10 proxier.go:820] "SyncProxyRules complete" elapsed="75.81734ms"
I0615 03:37:31.383990 10 service.go:322] "Service updated ports" service="aggregator-1238/sample-api" portCount=0
I0615 03:37:31.689915 10 service.go:462] "Removing service port" portName="aggregator-1238/sample-api"
I0615 03:37:31.690032 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:31.716742 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=48
I0615 03:37:31.720374 10 proxier.go:820] "SyncProxyRules complete" elapsed="30.465874ms"
I0615 03:37:32.721562 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:32.747113 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=45
I0615 03:37:32.750351 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.873406ms"
I0615 03:37:33.751421 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:33.788198 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=46
I0615 03:37:33.793425 10 proxier.go:820] "SyncProxyRules complete" elapsed="42.11308ms"
I0615 03:37:37.932909 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:37.964965 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=49
I0615 03:37:37.969474 10 proxier.go:820] "SyncProxyRules complete" elapsed="36.67828ms"
I0615 03:37:38.007160 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:38.067364 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=46
I0615 03:37:38.073729 10 proxier.go:820] "SyncProxyRules complete" elapsed="66.663768ms"
I0615 03:37:39.074690 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:39.099033 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=45
I0615 03:37:39.102267 10 proxier.go:820] "SyncProxyRules complete" elapsed="27.665192ms"
I0615 03:37:40.106593 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:40.142944 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=46
I0615 03:37:40.147322 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.813678ms"
I0615 03:37:49.063929 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:49.091158 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=48
I0615 03:37:49.095966 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.106388ms"
I0615 03:37:49.101506 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:49.127436 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=46
I0615 03:37:49.131240 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.796312ms"
I0615 03:37:50.067727 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:50.097824 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=45
I0615 03:37:50.102000 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.475968ms"
I0615 03:37:54.922302 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:54.968644 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=48
I0615 03:37:54.974116 10 proxier.go:820] "SyncProxyRules complete" elapsed="51.915712ms"
I0615 03:37:55.035119 10 proxier.go:853] "Syncing iptables rules"
I0615 03:37:55.070289 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=46
I0615 03:37:55.075396 10 proxier.go:820] "SyncProxyRules complete" elapsed="40.343879ms"
I0615 03:38:00.263203 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:00.291438 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=48
I0615 03:38:00.297158 10 proxier.go:820] "SyncProxyRules complete" elapsed="34.035697ms"
I0615 03:38:00.297689 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:00.331869 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=46
I0615 03:38:00.347866 10 proxier.go:820] "SyncProxyRules complete" elapsed="50.269232ms"
I0615 03:38:01.347218 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:01.384270 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=49
I0615 03:38:01.394253 10 proxier.go:820] "SyncProxyRules complete" elapsed="47.10832ms"
I0615 03:38:02.395451 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:02.423404 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=46
I0615 03:38:02.428248 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.89134ms"
I0615 03:38:24.508995 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:24.537076 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=45
I0615 03:38:24.541732 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.795345ms"
I0615 03:38:26.435579 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:26.469637 10 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=45
I0615 03:38:26.473600 10 proxier.go:820] "SyncProxyRules complete" elapsed="38.091251ms"
I0615 03:38:27.729976 10 service.go:322] "Service updated ports" service="services-2404/affinity-clusterip" portCount=1
I0615 03:38:27.730029 10 service.go:437] "Adding new service port" portName="services-2404/affinity-clusterip" servicePort="172.20.18.29:80/TCP"
I0615 03:38:27.730074 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:27.754064 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=45
I0615 03:38:27.757496 10 proxier.go:820] "SyncProxyRules complete" elapsed="27.474882ms"
I0615 03:38:27.757673 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:27.783042 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=45
I0615 03:38:27.787410 10 proxier.go:820] "SyncProxyRules complete" elapsed="29.774157ms"
I0615 03:38:29.461573 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:29.516107 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=50
I0615 03:38:29.521425 10 proxier.go:820] "SyncProxyRules complete" elapsed="59.915171ms"
I0615 03:38:30.161369 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:30.187864 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=50
I0615 03:38:30.192999 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.7121ms"
I0615 03:38:31.159251 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:31.183682 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=54
I0615 03:38:31.187727 10 proxier.go:820] "SyncProxyRules complete" elapsed="28.539397ms"
I0615 03:38:32.633004 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:32.659873 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=58
I0615 03:38:32.663967 10 proxier.go:820] "SyncProxyRules complete" elapsed="31.030511ms"
I0615 03:38:36.634904 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:36.663416 10 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=4 numNATChains=25 numNATRules=58
I0615 03:38:36.667897 10 proxier.go:820] "SyncProxyRules complete" elapsed="32.865695ms"
I0615 03:38:37.100841 10 service.go:322] "Service updated ports" service="services-1919/service-headless-toggled" portCount=1
I0615 03:38:37.100902 10 service.go:437] "Adding new service port" portName="services-1919/service-headless-toggled" servicePort="172.20.6.132:80/TCP"
I0615 03:38:37.100951 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:37.142176 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=25 numNATRules=58
I0615 03:38:37.147822 10 proxier.go:820] "SyncProxyRules complete" elapsed="46.922117ms"
I0615 03:38:38.036914 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:38.074982 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=27 numNATRules=62
I0615 03:38:38.090350 10 proxier.go:820] "SyncProxyRules complete" elapsed="53.512343ms"
I0615 03:38:39.090622 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:39.134239 10 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=28 numNATRules=65
I0615 03:38:39.140064 10 proxier.go:820] "SyncProxyRules complete" elapsed="49.571145ms"
I0615 03:38:41.520961 10 service.go:322] "Service updated ports" service="webhook-8072/e2e-test-webhook" portCount=1
I0615 03:38:41.521011 10 service.go:437] "Adding new service port" portName="webhook-8072/e2e-test-webhook" servicePort="172.20.27.110:8443/TCP"
I0615 03:38:41.521121 10 proxier.go:853] "Syncing iptables rules"
I0615 03:38:41.556023 10 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=15 numFilterChains=4 numFilterRules=5 numNATChains=28 numNATRules=65
I0615 03:38:41.560561 10 proxier.go:820] "SyncProxyRules complete" elapsed="39.554346ms"
I0615 03:38:41.560664 10 
proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:41.585390 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=69\nI0615 03:38:41.589981 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.383639ms\"\nI0615 03:38:43.433437 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:43.474299 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=72\nI0615 03:38:43.478832 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.482684ms\"\nI0615 03:38:43.533664 10 service.go:322] \"Service updated ports\" service=\"services-7172/affinity-clusterip-timeout\" portCount=1\nI0615 03:38:43.533711 10 service.go:437] \"Adding new service port\" portName=\"services-7172/affinity-clusterip-timeout\" servicePort=\"172.20.25.41:80/TCP\"\nI0615 03:38:43.533745 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:43.568906 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=17 numFilterChains=4 numFilterRules=5 numNATChains=31 numNATRules=72\nI0615 03:38:43.573152 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.443294ms\"\nI0615 03:38:44.525858 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:44.553437 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=18 numFilterChains=4 numFilterRules=4 numNATChains=33 numNATRules=77\nI0615 03:38:44.558777 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.9698ms\"\nI0615 03:38:46.035468 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:46.060663 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=19 numFilterChains=4 numFilterRules=4 numNATChains=34 numNATRules=81\nI0615 03:38:46.065154 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.746671ms\"\nI0615 03:38:48.909243 10 proxier.go:853] 
\"Syncing iptables rules\"\nI0615 03:38:48.949383 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=19 numFilterChains=4 numFilterRules=4 numNATChains=34 numNATRules=78\nI0615 03:38:48.954004 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.885298ms\"\nI0615 03:38:48.954274 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:48.985313 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=19 numFilterChains=4 numFilterRules=5 numNATChains=33 numNATRules=71\nI0615 03:38:48.990636 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.444896ms\"\nI0615 03:38:50.135798 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:50.164685 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=19 numFilterChains=4 numFilterRules=5 numNATChains=30 numNATRules=68\nI0615 03:38:50.170624 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.886993ms\"\nI0615 03:38:51.171577 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:51.198155 10 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=17 numFilterChains=4 numFilterRules=5 numNATChains=30 numNATRules=68\nI0615 03:38:51.203265 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.76341ms\"\nI0615 03:38:58.054660 10 service.go:322] \"Service updated ports\" service=\"webhook-8072/e2e-test-webhook\" portCount=0\nI0615 03:38:58.054694 10 service.go:462] \"Removing service port\" portName=\"webhook-8072/e2e-test-webhook\"\nI0615 03:38:58.054741 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:58.084640 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=5 numNATChains=30 numNATRules=66\nI0615 03:38:58.089661 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.963716ms\"\nI0615 03:38:58.090626 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:58.125854 10 proxier.go:1464] 
\"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=5 numNATChains=28 numNATRules=64\nI0615 03:38:58.131336 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.742672ms\"\nI0615 03:38:59.132019 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:38:59.160486 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=5 numNATChains=29 numNATRules=68\nI0615 03:38:59.165580 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.711277ms\"\nI0615 03:39:00.166120 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:00.207817 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=5 numNATChains=29 numNATRules=68\nI0615 03:39:00.214642 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.62569ms\"\nI0615 03:39:00.237452 10 service.go:322] \"Service updated ports\" service=\"services-2404/affinity-clusterip\" portCount=0\nI0615 03:39:01.215422 10 service.go:462] \"Removing service port\" portName=\"services-2404/affinity-clusterip\"\nI0615 03:39:01.215498 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:01.241291 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=29 numNATRules=68\nI0615 03:39:01.246295 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.906909ms\"\nI0615 03:39:04.972596 10 service.go:322] \"Service updated ports\" service=\"crd-webhook-4573/e2e-test-crd-conversion-webhook\" portCount=1\nI0615 03:39:04.972643 10 service.go:437] \"Adding new service port\" portName=\"crd-webhook-4573/e2e-test-crd-conversion-webhook\" servicePort=\"172.20.31.218:9443/TCP\"\nI0615 03:39:04.972697 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:05.004065 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 
numFilterRules=5 numNATChains=29 numNATRules=68\nI0615 03:39:05.008789 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.149813ms\"\nI0615 03:39:05.009023 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:05.040537 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=72\nI0615 03:39:05.047321 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.488219ms\"\nI0615 03:39:07.946652 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:08.001583 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=17 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=70\nI0615 03:39:08.013810 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"67.233379ms\"\nI0615 03:39:08.013941 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:08.053681 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=16 numFilterChains=4 numFilterRules=6 numNATChains=30 numNATRules=62\nI0615 03:39:08.058840 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.981799ms\"\nI0615 03:39:09.059683 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:09.089013 10 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=14 numFilterChains=4 numFilterRules=6 numNATChains=25 numNATRules=57\nI0615 03:39:09.094140 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.5627ms\"\nI0615 03:39:10.091049 10 service.go:322] \"Service updated ports\" service=\"crd-webhook-4573/e2e-test-crd-conversion-webhook\" portCount=0\nI0615 03:39:10.091089 10 service.go:462] \"Removing service port\" portName=\"crd-webhook-4573/e2e-test-crd-conversion-webhook\"\nI0615 03:39:10.091141 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:10.151110 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=25 
numNATRules=55\nI0615 03:39:10.160690 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"69.594453ms\"\nI0615 03:39:11.161138 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:11.186727 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=53\nI0615 03:39:11.191155 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.081562ms\"\nI0615 03:39:17.536828 10 service.go:322] \"Service updated ports\" service=\"services-1919/service-headless-toggled\" portCount=0\nI0615 03:39:17.536859 10 service.go:462] \"Removing service port\" portName=\"services-1919/service-headless-toggled\"\nI0615 03:39:17.536895 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:17.576577 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=23 numNATRules=47\nI0615 03:39:17.581172 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.30794ms\"\nI0615 03:39:19.141173 10 service.go:322] \"Service updated ports\" service=\"webhook-5351/e2e-test-webhook\" portCount=1\nI0615 03:39:19.141212 10 service.go:437] \"Adding new service port\" portName=\"webhook-5351/e2e-test-webhook\" servicePort=\"172.20.19.179:8443/TCP\"\nI0615 03:39:19.141246 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:19.172828 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=19 numNATRules=43\nI0615 03:39:19.188365 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"47.153143ms\"\nI0615 03:39:19.190292 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:19.230348 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=47\nI0615 03:39:19.239730 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.624812ms\"\nI0615 
03:39:21.306992 10 service.go:322] \"Service updated ports\" service=\"webhook-5351/e2e-test-webhook\" portCount=0\nI0615 03:39:21.307046 10 service.go:462] \"Removing service port\" portName=\"webhook-5351/e2e-test-webhook\"\nI0615 03:39:21.307097 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:21.337853 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=21 numNATRules=45\nI0615 03:39:21.343395 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.345181ms\"\nI0615 03:39:21.343780 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:21.372472 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=43\nI0615 03:39:21.376575 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.142417ms\"\nI0615 03:39:21.846645 10 service.go:322] \"Service updated ports\" service=\"services-2860/test-service-9s6g6\" portCount=1\nI0615 03:39:22.279907 10 service.go:322] \"Service updated ports\" service=\"services-2860/test-service-9s6g6\" portCount=1\nI0615 03:39:22.377225 10 service.go:437] \"Adding new service port\" portName=\"services-2860/test-service-9s6g6:http\" servicePort=\"172.20.11.75:80/TCP\"\nI0615 03:39:22.377306 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:22.415562 10 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=8 numNATChains=19 numNATRules=43\nI0615 03:39:22.421642 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.216913ms\"\nI0615 03:39:23.003078 10 service.go:322] \"Service updated ports\" service=\"services-2860/test-service-9s6g6\" portCount=1\nI0615 03:39:23.291814 10 service.go:322] \"Service updated ports\" service=\"services-2860/test-service-9s6g6\" portCount=0\nI0615 03:39:23.422186 10 service.go:462] \"Removing service port\" 
portName=\"services-2860/test-service-9s6g6:http\"\nI0615 03:39:23.422248 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:23.448326 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=43\nI0615 03:39:23.453662 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.501296ms\"\nI0615 03:39:29.986632 10 service.go:322] \"Service updated ports\" service=\"deployment-4350/test-rolling-update-with-lb\" portCount=0\nI0615 03:39:29.986677 10 service.go:462] \"Removing service port\" portName=\"deployment-4350/test-rolling-update-with-lb\"\nI0615 03:39:29.986724 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:30.034969 10 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=43\nI0615 03:39:30.043371 10 service_health.go:107] \"Closing healthcheck\" service=\"deployment-4350/test-rolling-update-with-lb\" port=31657\nE0615 03:39:30.043515 10 service_health.go:187] \"Healthcheck closed\" err=\"accept tcp [::]:31657: use of closed network connection\" service=\"deployment-4350/test-rolling-update-with-lb\"\nI0615 03:39:30.043540 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.869218ms\"\nI0615 03:39:31.871169 10 service.go:322] \"Service updated ports\" service=\"services-1919/service-headless-toggled\" portCount=1\nI0615 03:39:31.871221 10 service.go:437] \"Adding new service port\" portName=\"services-1919/service-headless-toggled\" servicePort=\"172.20.6.132:80/TCP\"\nI0615 03:39:31.871269 10 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:39:31.900487 10 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=53\nI0615 03:39:31.906108 10 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"34.890814ms\"\n==== END logs for container kube-proxy of pod 
kube-system/kube-proxy-i-0a5092cc559ae3bff ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-i-0b28fcd2505512be6 ====\n2022/06/15 03:20:45 Running command:\nCommand env: (log-file=/var/log/kube-proxy.log, also-stdout=true, redirect-stderr=true)\nRun from directory: \nExecutable path: /usr/local/bin/kube-proxy\nArgs (comma-delimited): /usr/local/bin/kube-proxy,--conntrack-max-per-core=131072,--hostname-override=i-0b28fcd2505512be6,--kubeconfig=/var/lib/kube-proxy/kubeconfig,--master=https://api.internal.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io,--oom-score-adj=-998,--v=2\n2022/06/15 03:20:45 Now listening for interrupts\nI0615 03:20:45.648193 11 flags.go:64] FLAG: --add-dir-header=\"false\"\nI0615 03:20:45.648698 11 flags.go:64] FLAG: --alsologtostderr=\"false\"\nI0615 03:20:45.648829 11 flags.go:64] FLAG: --bind-address=\"0.0.0.0\"\nI0615 03:20:45.648906 11 flags.go:64] FLAG: --bind-address-hard-fail=\"false\"\nI0615 03:20:45.648976 11 flags.go:64] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0615 03:20:45.649055 11 flags.go:64] FLAG: --cleanup=\"false\"\nI0615 03:20:45.649126 11 flags.go:64] FLAG: --cluster-cidr=\"\"\nI0615 03:20:45.649200 11 flags.go:64] FLAG: --config=\"\"\nI0615 03:20:45.649278 11 flags.go:64] FLAG: --config-sync-period=\"15m0s\"\nI0615 03:20:45.649359 11 flags.go:64] FLAG: --conntrack-max-per-core=\"131072\"\nI0615 03:20:45.649428 11 flags.go:64] FLAG: --conntrack-min=\"131072\"\nI0615 03:20:45.649618 11 flags.go:64] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0615 03:20:45.649627 11 flags.go:64] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0615 03:20:45.649632 11 flags.go:64] FLAG: --detect-local-mode=\"\"\nI0615 03:20:45.649639 11 flags.go:64] FLAG: --feature-gates=\"\"\nI0615 03:20:45.649646 11 flags.go:64] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0615 03:20:45.649665 11 flags.go:64] FLAG: --healthz-port=\"10256\"\nI0615 03:20:45.649671 11 flags.go:64] FLAG: 
--help=\"false\"\nI0615 03:20:45.649676 11 flags.go:64] FLAG: --hostname-override=\"i-0b28fcd2505512be6\"\nI0615 03:20:45.649681 11 flags.go:64] FLAG: --iptables-masquerade-bit=\"14\"\nI0615 03:20:45.649685 11 flags.go:64] FLAG: --iptables-min-sync-period=\"1s\"\nI0615 03:20:45.649689 11 flags.go:64] FLAG: --iptables-sync-period=\"30s\"\nI0615 03:20:45.649693 11 flags.go:64] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0615 03:20:45.649710 11 flags.go:64] FLAG: --ipvs-min-sync-period=\"0s\"\nI0615 03:20:45.649715 11 flags.go:64] FLAG: --ipvs-scheduler=\"\"\nI0615 03:20:45.649719 11 flags.go:64] FLAG: --ipvs-strict-arp=\"false\"\nI0615 03:20:45.649723 11 flags.go:64] FLAG: --ipvs-sync-period=\"30s\"\nI0615 03:20:45.649728 11 flags.go:64] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0615 03:20:45.649732 11 flags.go:64] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0615 03:20:45.649736 11 flags.go:64] FLAG: --ipvs-udp-timeout=\"0s\"\nI0615 03:20:45.649740 11 flags.go:64] FLAG: --kube-api-burst=\"10\"\nI0615 03:20:45.649745 11 flags.go:64] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0615 03:20:45.649750 11 flags.go:64] FLAG: --kube-api-qps=\"5\"\nI0615 03:20:45.649757 11 flags.go:64] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0615 03:20:45.649762 11 flags.go:64] FLAG: --log-backtrace-at=\":0\"\nI0615 03:20:45.649770 11 flags.go:64] FLAG: --log-dir=\"\"\nI0615 03:20:45.649776 11 flags.go:64] FLAG: --log-file=\"\"\nI0615 03:20:45.649780 11 flags.go:64] FLAG: --log-file-max-size=\"1800\"\nI0615 03:20:45.649785 11 flags.go:64] FLAG: --log-flush-frequency=\"5s\"\nI0615 03:20:45.649789 11 flags.go:64] FLAG: --logtostderr=\"true\"\nI0615 03:20:45.649795 11 flags.go:64] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0615 03:20:45.649800 11 flags.go:64] FLAG: --masquerade-all=\"false\"\nI0615 03:20:45.649806 11 flags.go:64] FLAG: --master=\"https://api.internal.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io\"\nI0615 03:20:45.649812 11 
flags.go:64] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0615 03:20:45.649818 11 flags.go:64] FLAG: --metrics-port=\"10249\"\nI0615 03:20:45.649823 11 flags.go:64] FLAG: --nodeport-addresses=\"[]\"\nI0615 03:20:45.649832 11 flags.go:64] FLAG: --one-output=\"false\"\nI0615 03:20:45.649837 11 flags.go:64] FLAG: --oom-score-adj=\"-998\"\nI0615 03:20:45.649842 11 flags.go:64] FLAG: --pod-bridge-interface=\"\"\nI0615 03:20:45.649846 11 flags.go:64] FLAG: --pod-interface-name-prefix=\"\"\nI0615 03:20:45.649851 11 flags.go:64] FLAG: --profiling=\"false\"\nI0615 03:20:45.649855 11 flags.go:64] FLAG: --proxy-mode=\"\"\nI0615 03:20:45.649861 11 flags.go:64] FLAG: --proxy-port-range=\"\"\nI0615 03:20:45.649867 11 flags.go:64] FLAG: --show-hidden-metrics-for-version=\"\"\nI0615 03:20:45.649872 11 flags.go:64] FLAG: --skip-headers=\"false\"\nI0615 03:20:45.649876 11 flags.go:64] FLAG: --skip-log-headers=\"false\"\nI0615 03:20:45.649880 11 flags.go:64] FLAG: --stderrthreshold=\"2\"\nI0615 03:20:45.649886 11 flags.go:64] FLAG: --udp-timeout=\"250ms\"\nI0615 03:20:45.649891 11 flags.go:64] FLAG: --v=\"2\"\nI0615 03:20:45.649895 11 flags.go:64] FLAG: --version=\"false\"\nI0615 03:20:45.649903 11 flags.go:64] FLAG: --vmodule=\"\"\nI0615 03:20:45.649908 11 flags.go:64] FLAG: --write-config-to=\"\"\nI0615 03:20:45.649924 11 server.go:231] \"Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP\"\nI0615 03:20:45.650369 11 feature_gate.go:245] feature gates: &{map[]}\nI0615 03:20:45.650694 11 feature_gate.go:245] feature gates: &{map[]}\nE0615 03:21:15.717013 11 node.go:152] Failed to retrieve node info: Get \"https://api.internal.e2e-e2e-kops-aws-cni-amazon-vpc.test-cncf-aws.k8s.io/api/v1/nodes/i-0b28fcd2505512be6\": dial tcp 203.0.113.123:443: i/o timeout\nI0615 03:21:16.904016 11 node.go:163] Successfully retrieved node IP: 172.20.63.225\nI0615 03:21:16.904058 11 server_others.go:138] \"Detected node IP\" 
address=\"172.20.63.225\"\nI0615 03:21:16.904117 11 server_others.go:578] \"Unknown proxy mode, assuming iptables proxy\" proxyMode=\"\"\nI0615 03:21:16.904239 11 server_others.go:175] \"DetectLocalMode\" LocalMode=\"ClusterCIDR\"\nI0615 03:21:16.950175 11 server_others.go:206] \"Using iptables Proxier\"\nI0615 03:21:16.950215 11 server_others.go:213] \"kube-proxy running in dual-stack mode\" ipFamily=IPv4\nI0615 03:21:16.950227 11 server_others.go:214] \"Creating dualStackProxier for iptables\"\nI0615 03:21:16.950237 11 server_others.go:485] \"Detect-local-mode set to ClusterCIDR, but no cluster CIDR defined\"\nI0615 03:21:16.950244 11 server_others.go:541] \"Defaulting to no-op detect-local\" detect-local-mode=\"ClusterCIDR\"\nI0615 03:21:16.950268 11 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0615 03:21:16.950344 11 utils.go:431] \"Changed sysctl\" name=\"net/ipv4/conf/all/route_localnet\" before=0 after=1\nI0615 03:21:16.950381 11 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI0615 03:21:16.950414 11 proxier.go:319] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0615 03:21:16.950442 11 proxier.go:329] \"Iptables supports --random-fully\" ipFamily=IPv4\nI0615 03:21:16.950451 11 proxier.go:259] \"Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259\"\nI0615 03:21:16.951141 11 proxier.go:275] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI0615 03:21:16.951173 11 proxier.go:319] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0615 03:21:16.951202 11 proxier.go:329] \"Iptables supports --random-fully\" ipFamily=IPv6\nI0615 03:21:16.951351 11 server.go:661] \"Version info\" version=\"v1.24.1\"\nI0615 03:21:16.951360 11 
server.go:663] \"Golang settings\" GOGC=\"\" GOMAXPROCS=\"\" GOTRACEBACK=\"\"\nI0615 03:21:16.956223 11 conntrack.go:52] \"Setting nf_conntrack_max\" nf_conntrack_max=262144\nI0615 03:21:16.956331 11 conntrack.go:100] \"Set sysctl\" entry=\"net/netfilter/nf_conntrack_tcp_timeout_close_wait\" value=3600\nI0615 03:21:16.956942 11 config.go:317] \"Starting service config controller\"\nI0615 03:21:16.956966 11 shared_informer.go:255] Waiting for caches to sync for service config\nI0615 03:21:16.957018 11 config.go:226] \"Starting endpoint slice config controller\"\nI0615 03:21:16.957025 11 shared_informer.go:255] Waiting for caches to sync for endpoint slice config\nI0615 03:21:16.957990 11 config.go:444] \"Starting node config controller\"\nI0615 03:21:16.958005 11 shared_informer.go:255] Waiting for caches to sync for node config\nI0615 03:21:16.964294 11 service.go:322] \"Service updated ports\" service=\"default/kubernetes\" portCount=1\nI0615 03:21:16.964372 11 service.go:322] \"Service updated ports\" service=\"kube-system/kube-dns\" portCount=3\nI0615 03:21:16.966471 11 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0615 03:21:16.966495 11 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0615 03:21:17.058063 11 shared_informer.go:262] Caches are synced for endpoint slice config\nI0615 03:21:17.058063 11 shared_informer.go:262] Caches are synced for node config\nI0615 03:21:17.058122 11 shared_informer.go:262] Caches are synced for service config\nI0615 03:21:17.058144 11 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0615 03:21:17.058158 11 proxier.go:812] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0615 03:21:17.058224 11 service.go:437] \"Adding new service port\" portName=\"default/kubernetes:https\" servicePort=\"172.20.0.1:443/TCP\"\nI0615 
03:21:17.058241 11 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns\" servicePort=\"172.20.0.10:53/UDP\"\nI0615 03:21:17.058253 11 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:dns-tcp\" servicePort=\"172.20.0.10:53/TCP\"\nI0615 03:21:17.058273 11 service.go:437] \"Adding new service port\" portName=\"kube-system/kube-dns:metrics\" servicePort=\"172.20.0.10:9153/TCP\"\nI0615 03:21:17.058318 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:17.096022 11 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=9\nI0615 03:21:17.119933 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"61.733765ms\"\nI0615 03:21:17.119957 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:17.156592 11 proxier.go:1464] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0615 03:21:17.160643 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.68397ms\"\nI0615 03:21:53.637053 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:53.662423 11 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=1 numFilterChains=4 numFilterRules=6 numNATChains=6 numNATRules=9\nI0615 03:21:53.665691 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.648469ms\"\nI0615 03:21:53.665915 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:53.687198 11 proxier.go:1464] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0615 03:21:53.688802 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"22.886537ms\"\nI0615 03:21:56.778836 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:56.826608 11 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=4 numFilterChains=4 numFilterRules=6 numNATChains=6 
numNATRules=9\nI0615 03:21:56.833737 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.967739ms\"\nI0615 03:21:57.785410 11 proxier.go:837] \"Stale service\" protocol=\"udp\" servicePortName=\"kube-system/kube-dns:dns\" clusterIP=\"172.20.0.10\"\nI0615 03:21:57.785429 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:21:57.809612 11 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=4 numFilterChains=4 numFilterRules=3 numNATChains=12 numNATRules=21\nI0615 03:21:57.822798 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.471692ms\"\nI0615 03:22:01.729249 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:22:01.754595 11 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=12 numNATRules=21\nI0615 03:22:01.757604 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.414981ms\"\nI0615 03:22:02.734528 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:22:02.758584 11 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30\nI0615 03:22:02.767075 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.686673ms\"\nI0615 03:24:56.343921 11 service.go:322] \"Service updated ports\" service=\"services-1407/no-pods\" portCount=1\nI0615 03:24:56.344012 11 service.go:437] \"Adding new service port\" portName=\"services-1407/no-pods\" servicePort=\"172.20.17.176:80/TCP\"\nI0615 03:24:56.344045 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:56.400212 11 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:24:56.409851 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"65.865667ms\"\nI0615 03:24:56.409917 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:56.476477 11 proxier.go:1464] \"Reloading service 
iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:24:56.483705 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"73.800472ms\"\nI0615 03:24:57.824854 11 service.go:322] \"Service updated ports\" service=\"kubectl-2153/agnhost-replica\" portCount=1\nI0615 03:24:57.824892 11 service.go:437] \"Adding new service port\" portName=\"kubectl-2153/agnhost-replica\" servicePort=\"172.20.7.165:6379/TCP\"\nI0615 03:24:57.824912 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:57.851225 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:24:57.855008 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.116294ms\"\nI0615 03:24:58.855874 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:58.880304 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30\nI0615 03:24:58.884682 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.83314ms\"\nI0615 03:24:59.707451 11 service.go:322] \"Service updated ports\" service=\"kubectl-2153/agnhost-primary\" portCount=1\nI0615 03:24:59.707488 11 service.go:437] \"Adding new service port\" portName=\"kubectl-2153/agnhost-primary\" servicePort=\"172.20.0.118:6379/TCP\"\nI0615 03:24:59.707524 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:24:59.732146 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=30\nI0615 03:24:59.735841 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.351537ms\"\nI0615 03:25:00.463940 11 service.go:322] \"Service updated ports\" service=\"kubectl-2153/frontend\" portCount=1\nI0615 03:25:00.463989 11 service.go:437] \"Adding new service port\" portName=\"kubectl-2153/frontend\" 
servicePort=\"172.20.8.106:80/TCP\"\nI0615 03:25:00.464020 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:00.498401 11 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=30\nI0615 03:25:00.505026 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.036285ms\"\nI0615 03:25:01.505185 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:01.538783 11 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=30\nI0615 03:25:01.544584 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.43115ms\"\nI0615 03:25:03.103848 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:03.131080 11 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=34\nI0615 03:25:03.135642 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.902793ms\"\nI0615 03:25:05.742466 11 service.go:322] \"Service updated ports\" service=\"webhook-2310/e2e-test-webhook\" portCount=1\nI0615 03:25:05.742576 11 service.go:437] \"Adding new service port\" portName=\"webhook-2310/e2e-test-webhook\" servicePort=\"172.20.26.53:8443/TCP\"\nI0615 03:25:05.742639 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:05.787917 11 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=8 numFilterChains=4 numFilterRules=7 numNATChains=17 numNATRules=34\nI0615 03:25:05.794083 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"51.506946ms\"\nI0615 03:25:05.794170 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:05.830186 11 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=9 numFilterChains=4 numFilterRules=6 numNATChains=19 numNATRules=38\nI0615 03:25:05.834365 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.242024ms\"\nI0615 
03:25:06.835339 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:06.859547 11 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=21 numNATRules=42\nI0615 03:25:06.862858 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"27.589292ms\"\nI0615 03:25:08.702658 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:08.727279 11 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=45\nI0615 03:25:08.731127 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.506965ms\"\nI0615 03:25:09.704114 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:09.733031 11 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=49\nI0615 03:25:09.736976 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.909457ms\"\nI0615 03:25:13.578459 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:13.626033 11 service.go:322] \"Service updated ports\" service=\"services-1407/no-pods\" portCount=0\nI0615 03:25:13.642410 11 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=24 numNATRules=49\nI0615 03:25:13.647084 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"68.642462ms\"\nI0615 03:25:13.647128 11 service.go:462] \"Removing service port\" portName=\"services-1407/no-pods\"\nI0615 03:25:13.647162 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:13.681865 11 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=49\nI0615 03:25:13.687234 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.104892ms\"\nI0615 03:25:15.400098 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:15.433269 11 
proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=25 numNATRules=52\nI0615 03:25:15.438108 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.070226ms\"\nI0615 03:25:15.445709 11 service.go:322] \"Service updated ports\" service=\"webhook-2310/e2e-test-webhook\" portCount=0\nI0615 03:25:16.402864 11 service.go:462] \"Removing service port\" portName=\"webhook-2310/e2e-test-webhook\"\nI0615 03:25:16.403125 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:16.428924 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=53\nI0615 03:25:16.433424 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.5697ms\"\nI0615 03:25:18.850356 11 service.go:322] \"Service updated ports\" service=\"kubectl-2153/agnhost-replica\" portCount=0\nI0615 03:25:18.850395 11 service.go:462] \"Removing service port\" portName=\"kubectl-2153/agnhost-replica\"\nI0615 03:25:18.850428 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:18.905362 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=24 numNATRules=47\nI0615 03:25:18.911784 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"61.382995ms\"\nI0615 03:25:18.911873 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:18.935694 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44\nI0615 03:25:18.939754 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"27.930061ms\"\nI0615 03:25:19.523403 11 service.go:322] \"Service updated ports\" service=\"kubectl-2153/agnhost-primary\" portCount=0\nI0615 03:25:19.721381 11 service.go:322] \"Service updated ports\" service=\"webhook-2293/e2e-test-webhook\" portCount=1\nI0615 03:25:19.940677 11 
service.go:462] \"Removing service port\" portName=\"kubectl-2153/agnhost-primary\"\nI0615 03:25:19.940731 11 service.go:437] \"Adding new service port\" portName=\"webhook-2293/e2e-test-webhook\" servicePort=\"172.20.19.169:8443/TCP\"\nI0615 03:25:19.940805 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:19.968581 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=46\nI0615 03:25:19.972861 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"32.202154ms\"\nI0615 03:25:20.202873 11 service.go:322] \"Service updated ports\" service=\"kubectl-2153/frontend\" portCount=0\nI0615 03:25:20.973875 11 service.go:462] \"Removing service port\" portName=\"kubectl-2153/frontend\"\nI0615 03:25:20.974055 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:21.021147 11 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=38\nI0615 03:25:21.029158 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"55.305162ms\"\nI0615 03:25:22.071530 11 service.go:322] \"Service updated ports\" service=\"webhook-2293/e2e-test-webhook\" portCount=0\nI0615 03:25:22.071572 11 service.go:462] \"Removing service port\" portName=\"webhook-2293/e2e-test-webhook\"\nI0615 03:25:22.071609 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:22.098112 11 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=32\nI0615 03:25:22.101848 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.274817ms\"\nI0615 03:25:23.102040 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:23.126946 11 proxier.go:1464] \"Reloading service iptables data\" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30\nI0615 03:25:23.130603 11 proxier.go:820] \"SyncProxyRules 
complete\" elapsed=\"28.630394ms\"\nI0615 03:25:25.188194 11 service.go:322] \"Service updated ports\" service=\"services-477/sourceip-test\" portCount=1\nI0615 03:25:25.188243 11 service.go:437] \"Adding new service port\" portName=\"services-477/sourceip-test\" servicePort=\"172.20.2.117:8080/TCP\"\nI0615 03:25:25.188274 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:25.217882 11 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:25:25.221912 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.672849ms\"\nI0615 03:25:25.221955 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:25.246808 11 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30\nI0615 03:25:25.250108 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.160367ms\"\nI0615 03:25:28.602715 11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingtdhbm\"\nI0615 03:25:28.747575 11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingj995w\"\nI0615 03:25:28.892031 11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ing8npc6\"\nI0615 03:25:29.763525 11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ing8npc6\"\nI0615 03:25:30.051270 11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ing8npc6\"\nI0615 03:25:30.195276 11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on 
endpoint slice: e2e-example-ing8npc6\"\nI0615 03:25:30.627206 11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingj995w\"\nI0615 03:25:30.629352 11 endpoints.go:276] \"Error getting endpoint slice cache keys\" err=\"no kubernetes.io/service-name label set on endpoint slice: e2e-example-ingtdhbm\"\nI0615 03:25:33.538797 11 service.go:322] \"Service updated ports\" service=\"services-6951/nodeport-service\" portCount=1\nI0615 03:25:33.538849 11 service.go:437] \"Adding new service port\" portName=\"services-6951/nodeport-service\" servicePort=\"172.20.12.209:80/TCP\"\nI0615 03:25:33.538877 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:33.596215 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=30\nI0615 03:25:33.601374 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"62.526413ms\"\nI0615 03:25:33.601445 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:33.638220 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=6 numNATChains=15 numNATRules=30\nI0615 03:25:33.645675 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"44.251697ms\"\nI0615 03:25:33.693050 11 service.go:322] \"Service updated ports\" service=\"services-6951/externalsvc\" portCount=1\nI0615 03:25:34.646585 11 service.go:437] \"Adding new service port\" portName=\"services-6951/externalsvc\" servicePort=\"172.20.7.95:80/TCP\"\nI0615 03:25:34.646660 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:34.676508 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=7 numFilterChains=4 numFilterRules=7 numNATChains=15 numNATRules=30\nI0615 03:25:34.680194 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.675112ms\"\nI0615 03:25:35.685048 11 proxier.go:853] \"Syncing iptables 
rules\"\nI0615 03:25:35.714859 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=6 numNATChains=17 numNATRules=34\nI0615 03:25:35.718761 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"33.793625ms\"\nI0615 03:25:37.555463 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:37.589836 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=38\nI0615 03:25:37.594139 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.714608ms\"\nI0615 03:25:38.270077 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:38.294307 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41\nI0615 03:25:38.298182 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.165095ms\"\nI0615 03:25:39.428391 11 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-a-x8swd\" portCount=1\nI0615 03:25:39.428579 11 service.go:437] \"Adding new service port\" portName=\"services-6734/e2e-svc-a-x8swd:http\" servicePort=\"172.20.14.127:8001/TCP\"\nI0615 03:25:39.428616 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:39.557692 11 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=20 numNATRules=41\nI0615 03:25:39.573767 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"145.199337ms\"\nI0615 03:25:39.575992 11 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-b-mv67m\" portCount=1\nI0615 03:25:39.576042 11 service.go:437] \"Adding new service port\" portName=\"services-6734/e2e-svc-b-mv67m:http\" servicePort=\"172.20.5.204:8002/TCP\"\nI0615 03:25:39.576076 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:39.631307 11 proxier.go:1464] \"Reloading service iptables data\" 
numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=20 numNATRules=41\nI0615 03:25:39.639673 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"63.636677ms\"\nI0615 03:25:39.724443 11 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-c-pw7db\" portCount=1\nI0615 03:25:40.013566 11 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-a-x8swd\" portCount=0\nI0615 03:25:40.018639 11 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-b-mv67m\" portCount=0\nI0615 03:25:40.429393 11 service.go:322] \"Service updated ports\" service=\"services-6951/nodeport-service\" portCount=0\nI0615 03:25:40.640525 11 service.go:462] \"Removing service port\" portName=\"services-6734/e2e-svc-b-mv67m:http\"\nI0615 03:25:40.640554 11 service.go:462] \"Removing service port\" portName=\"services-6951/nodeport-service\"\nI0615 03:25:40.640578 11 service.go:437] \"Adding new service port\" portName=\"services-6734/e2e-svc-c-pw7db:http\" servicePort=\"172.20.29.237:8003/TCP\"\nI0615 03:25:40.640588 11 service.go:462] \"Removing service port\" portName=\"services-6734/e2e-svc-a-x8swd:http\"\nI0615 03:25:40.640624 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:40.666348 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=41\nI0615 03:25:40.669499 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.980924ms\"\nI0615 03:25:42.776582 11 service.go:322] \"Service updated ports\" service=\"dns-7072/test-service-2\" portCount=1\nI0615 03:25:42.776631 11 service.go:437] \"Adding new service port\" portName=\"dns-7072/test-service-2:http\" servicePort=\"172.20.20.14:80/TCP\"\nI0615 03:25:42.776663 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:42.838220 11 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 
numNATChains=20 numNATRules=41\nI0615 03:25:42.843483 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"66.85465ms\"\nI0615 03:25:42.843549 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:42.887805 11 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41\nI0615 03:25:42.891700 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.179289ms\"\nI0615 03:25:45.535602 11 service.go:322] \"Service updated ports\" service=\"services-6734/e2e-svc-c-pw7db\" portCount=0\nI0615 03:25:45.535642 11 service.go:462] \"Removing service port\" portName=\"services-6734/e2e-svc-c-pw7db:http\"\nI0615 03:25:45.535675 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:45.567664 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=41\nI0615 03:25:45.572342 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.694745ms\"\nI0615 03:25:45.699243 11 service.go:322] \"Service updated ports\" service=\"resourcequota-8381/test-service\" portCount=1\nI0615 03:25:45.699289 11 service.go:437] \"Adding new service port\" portName=\"resourcequota-8381/test-service\" servicePort=\"172.20.15.254:80/TCP\"\nI0615 03:25:45.699323 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:45.748708 11 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41\nI0615 03:25:45.765845 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"66.555195ms\"\nI0615 03:25:45.851904 11 service.go:322] \"Service updated ports\" service=\"resourcequota-8381/test-service-np\" portCount=1\nI0615 03:25:46.766763 11 service.go:437] \"Adding new service port\" portName=\"resourcequota-8381/test-service-np\" servicePort=\"172.20.21.180:80/TCP\"\nI0615 03:25:46.766822 11 proxier.go:853] \"Syncing iptables 
rules\"\nI0615 03:25:46.794493 11 proxier.go:1464] \"Reloading service iptables data\" numServices=9 numEndpoints=10 numFilterChains=4 numFilterRules=7 numNATChains=20 numNATRules=41\nI0615 03:25:46.797984 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.247459ms\"\nI0615 03:25:48.301732 11 service.go:322] \"Service updated ports\" service=\"resourcequota-8381/test-service\" portCount=0\nI0615 03:25:48.301772 11 service.go:462] \"Removing service port\" portName=\"resourcequota-8381/test-service\"\nI0615 03:25:48.301822 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:48.349183 11 proxier.go:1464] \"Reloading service iptables data\" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=6 numNATChains=20 numNATRules=41\nI0615 03:25:48.354144 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"52.36452ms\"\nI0615 03:25:48.461042 11 service.go:322] \"Service updated ports\" service=\"resourcequota-8381/test-service-np\" portCount=0\nI0615 03:25:48.850634 11 service.go:462] \"Removing service port\" portName=\"resourcequota-8381/test-service-np\"\nI0615 03:25:48.851303 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:48.893862 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=39\nI0615 03:25:48.898996 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"48.37194ms\"\nI0615 03:25:49.899727 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:49.932403 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=36\nI0615 03:25:49.936495 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"36.843191ms\"\nI0615 03:25:50.971614 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:50.998727 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=17 
numNATRules=34\nI0615 03:25:51.002407 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"30.865969ms\"\nI0615 03:25:52.002668 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:52.068590 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:25:52.072699 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"70.139901ms\"\nI0615 03:25:53.869175 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:53.902724 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:25:53.909163 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.025979ms\"\nI0615 03:25:54.071104 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:54.106944 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:25:54.111478 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"40.427116ms\"\nI0615 03:25:54.246896 11 service.go:322] \"Service updated ports\" service=\"services-6951/externalsvc\" portCount=0\nI0615 03:25:55.112420 11 service.go:462] \"Removing service port\" portName=\"services-6951/externalsvc\"\nI0615 03:25:55.112476 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:55.149861 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:25:55.153915 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.501551ms\"\nI0615 03:25:57.820139 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:25:57.883641 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38\nI0615 03:25:57.889775 11 proxier.go:820] \"SyncProxyRules complete\" 
elapsed=\"69.678164ms\"\nI0615 03:26:00.032461 11 service.go:322] \"Service updated ports\" service=\"webhook-8409/e2e-test-webhook\" portCount=1\nI0615 03:26:00.032512 11 service.go:437] \"Adding new service port\" portName=\"webhook-8409/e2e-test-webhook\" servicePort=\"172.20.7.255:8443/TCP\"\nI0615 03:26:00.032546 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:00.069083 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=38\nI0615 03:26:00.078372 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"45.861194ms\"\nI0615 03:26:00.078461 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:00.132222 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=42\nI0615 03:26:00.137416 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"59.000328ms\"\nI0615 03:26:00.460835 11 service.go:322] \"Service updated ports\" service=\"services-477/sourceip-test\" portCount=0\nI0615 03:26:01.138329 11 service.go:462] \"Removing service port\" portName=\"services-477/sourceip-test\"\nI0615 03:26:01.138426 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:01.162659 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=40\nI0615 03:26:01.166489 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.180166ms\"\nI0615 03:26:02.573812 11 service.go:322] \"Service updated ports\" service=\"kubectl-4780/agnhost-primary\" portCount=1\nI0615 03:26:02.573860 11 service.go:437] \"Adding new service port\" portName=\"kubectl-4780/agnhost-primary\" servicePort=\"172.20.13.94:6379/TCP\"\nI0615 03:26:02.573895 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:02.634220 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 
numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=38\nI0615 03:26:02.642200 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"68.338077ms\"\nI0615 03:26:02.643533 11 service.go:322] \"Service updated ports\" service=\"webhook-8409/e2e-test-webhook\" portCount=0\nI0615 03:26:03.643200 11 service.go:462] \"Removing service port\" portName=\"webhook-8409/e2e-test-webhook\"\nI0615 03:26:03.643276 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:03.668960 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=36\nI0615 03:26:03.672473 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.291425ms\"\nI0615 03:26:07.273680 11 service.go:322] \"Service updated ports\" service=\"services-2131/endpoint-test2\" portCount=1\nI0615 03:26:07.273717 11 service.go:437] \"Adding new service port\" portName=\"services-2131/endpoint-test2\" servicePort=\"172.20.20.172:80/TCP\"\nI0615 03:26:07.273756 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:07.299670 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:26:07.303166 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.450791ms\"\nI0615 03:26:07.303397 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:07.329428 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34\nI0615 03:26:07.333217 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.999113ms\"\nI0615 03:26:09.840607 11 service.go:322] \"Service updated ports\" service=\"kubectl-4780/agnhost-primary\" portCount=0\nI0615 03:26:09.840817 11 service.go:462] \"Removing service port\" portName=\"kubectl-4780/agnhost-primary\"\nI0615 03:26:09.840919 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:09.875099 11 
proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:26:09.879456 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.643181ms\"\nI0615 03:26:09.892446 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:09.926785 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38\nI0615 03:26:09.930283 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.880392ms\"\nI0615 03:26:10.931120 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:10.955799 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38\nI0615 03:26:10.959987 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"28.898749ms\"\nI0615 03:26:19.254483 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:19.299165 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=36\nI0615 03:26:19.304351 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"49.927284ms\"\nI0615 03:26:19.390581 11 service.go:322] \"Service updated ports\" service=\"dns-7072/test-service-2\" portCount=0\nI0615 03:26:19.390624 11 service.go:462] \"Removing service port\" portName=\"dns-7072/test-service-2:http\"\nI0615 03:26:19.390655 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:19.429063 11 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34\nI0615 03:26:19.434293 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"43.662174ms\"\nI0615 03:26:20.434610 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:20.470123 11 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=8 
numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34\nI0615 03:26:20.474439 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.909643ms\"\nI0615 03:26:24.266263 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:24.293579 11 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=18 numNATRules=37\nI0615 03:26:24.297345 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"31.128516ms\"\nI0615 03:26:25.465927 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:25.492141 11 proxier.go:1464] \"Reloading service iptables data\" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=18 numNATRules=37\nI0615 03:26:25.495537 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"29.645575ms\"\nI0615 03:26:25.495571 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:25.517022 11 proxier.go:1464] \"Reloading service iptables data\" numServices=0 numEndpoints=0 numFilterChains=4 numFilterRules=3 numNATChains=4 numNATRules=5\nI0615 03:26:25.518754 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"23.178701ms\"\nI0615 03:26:29.006406 11 service.go:322] \"Service updated ports\" service=\"webhook-1668/e2e-test-webhook\" portCount=1\nI0615 03:26:29.006443 11 service.go:437] \"Adding new service port\" portName=\"webhook-1668/e2e-test-webhook\" servicePort=\"172.20.19.166:8443/TCP\"\nI0615 03:26:29.006465 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:29.040318 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=18 numNATRules=37\nI0615 03:26:29.045869 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"39.416634ms\"\nI0615 03:26:29.045954 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:29.095130 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 
numNATChains=20 numNATRules=41\nI0615 03:26:29.100434 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"54.518324ms\"\nI0615 03:26:29.450742 11 service.go:322] \"Service updated ports\" service=\"services-6962/tolerate-unready\" portCount=1\nI0615 03:26:30.100576 11 service.go:437] \"Adding new service port\" portName=\"services-6962/tolerate-unready:http\" servicePort=\"172.20.24.82:80/TCP\"\nI0615 03:26:30.100634 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:30.129813 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=41\nI0615 03:26:30.138909 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"38.381207ms\"\nI0615 03:26:31.139137 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:31.172074 11 proxier.go:1464] \"Reloading service iptables data\" numServices=7 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=20 numNATRules=39\nI0615 03:26:31.176656 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"37.606068ms\"\nI0615 03:26:34.483891 11 service.go:322] \"Service updated ports\" service=\"webhook-1668/e2e-test-webhook\" portCount=0\nI0615 03:26:34.483933 11 service.go:462] \"Removing service port\" portName=\"webhook-1668/e2e-test-webhook\"\nI0615 03:26:34.483966 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:34.520096 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=36\nI0615 03:26:34.525035 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"41.091556ms\"\nI0615 03:26:34.525117 11 proxier.go:853] \"Syncing iptables rules\"\nI0615 03:26:34.576397 11 proxier.go:1464] \"Reloading service iptables data\" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34\nI0615 03:26:34.581563 11 proxier.go:820] \"SyncProxyRules complete\" elapsed=\"56.483892ms\"\nI0615 
03:26:35.582044 11 proxier.go:853] "Syncing iptables rules"
I0615 03:26:35.606301 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=32
I0615 03:26:35.609627 11 proxier.go:820] "SyncProxyRules complete" elapsed="27.634755ms"
I0615 03:26:36.023317 11 service.go:322] "Service updated ports" service="services-2131/endpoint-test2" portCount=0
I0615 03:26:36.610612 11 service.go:462] "Removing service port" portName="services-2131/endpoint-test2"
I0615 03:26:36.610688 11 proxier.go:853] "Syncing iptables rules"
I0615 03:26:36.636746 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30
I0615 03:26:36.642069 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.48162ms"
I0615 03:26:40.018085 11 proxier.go:853] "Syncing iptables rules"
I0615 03:26:40.044044 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34
I0615 03:26:40.047709 11 proxier.go:820] "SyncProxyRules complete" elapsed="29.667591ms"
I0615 03:26:53.105782 11 proxier.go:853] "Syncing iptables rules"
I0615 03:26:53.134491 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34
I0615 03:26:53.137844 11 proxier.go:820] "SyncProxyRules complete" elapsed="32.112747ms"
I0615 03:26:53.530755 11 proxier.go:853] "Syncing iptables rules"
I0615 03:26:53.560385 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=32
I0615 03:26:53.564652 11 proxier.go:820] "SyncProxyRules complete" elapsed="33.952682ms"
I0615 03:26:55.272236 11 proxier.go:853] "Syncing iptables rules"
I0615 03:26:55.297818 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34
I0615 03:26:55.303569 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.384008ms"
I0615 03:26:57.030650 11 proxier.go:853] "Syncing iptables rules"
I0615 03:26:57.059838 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=32
I0615 03:26:57.063844 11 proxier.go:820] "SyncProxyRules complete" elapsed="33.226757ms"
I0615 03:26:57.603722 11 service.go:322] "Service updated ports" service="services-6962/tolerate-unready" portCount=0
I0615 03:26:57.603772 11 service.go:462] "Removing service port" portName="services-6962/tolerate-unready:http"
I0615 03:26:57.603809 11 proxier.go:853] "Syncing iptables rules"
I0615 03:26:57.632698 11 proxier.go:1464] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30
I0615 03:26:57.636426 11 proxier.go:820] "SyncProxyRules complete" elapsed="32.656655ms"
I0615 03:26:58.637578 11 proxier.go:853] "Syncing iptables rules"
I0615 03:26:58.696550 11 proxier.go:1464] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30
I0615 03:26:58.704496 11 proxier.go:820] "SyncProxyRules complete" elapsed="66.950765ms"
I0615 03:27:02.765756 11 service.go:322] "Service updated ports" service="services-3265/nodeport-collision-1" portCount=1
I0615 03:27:02.765806 11 service.go:437] "Adding new service port" portName="services-3265/nodeport-collision-1" servicePort="172.20.2.226:80/TCP"
I0615 03:27:02.765842 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:02.795257 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30
I0615 03:27:02.798373 11 proxier.go:820] "SyncProxyRules complete" elapsed="32.57051ms"
I0615 03:27:02.798420 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:02.829885 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30
I0615 03:27:02.833846 11 proxier.go:820] "SyncProxyRules complete" elapsed="35.440769ms"
I0615 03:27:03.065668 11 service.go:322] "Service updated ports" service="services-3265/nodeport-collision-1" portCount=0
I0615 03:27:03.242942 11 service.go:322] "Service updated ports" service="services-3265/nodeport-collision-2" portCount=1
I0615 03:27:03.834680 11 service.go:462] "Removing service port" portName="services-3265/nodeport-collision-1"
I0615 03:27:03.834737 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:03.866256 11 proxier.go:1464] "Reloading service iptables data" numServices=4 numEndpoints=7 numFilterChains=4 numFilterRules=3 numNATChains=15 numNATRules=30
I0615 03:27:03.870041 11 proxier.go:820] "SyncProxyRules complete" elapsed="35.410887ms"
I0615 03:27:06.873778 11 service.go:322] "Service updated ports" service="kubectl-3933/agnhost-primary" portCount=1
I0615 03:27:06.873833 11 service.go:437] "Adding new service port" portName="kubectl-3933/agnhost-primary" servicePort="172.20.22.55:6379/TCP"
I0615 03:27:06.873864 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:06.905038 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30
I0615 03:27:06.909398 11 proxier.go:820] "SyncProxyRules complete" elapsed="35.566393ms"
I0615 03:27:06.909446 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:06.937258 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=7 numFilterChains=4 numFilterRules=4 numNATChains=15 numNATRules=30
I0615 03:27:06.941298 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.865907ms"
I0615 03:27:07.374358 11 service.go:322] "Service updated ports" service="dns-4197/test-service-2" portCount=1
I0615 03:27:07.941713 11 service.go:437] "Adding new service port" portName="dns-4197/test-service-2:http" servicePort="172.20.30.40:80/TCP"
I0615 03:27:07.941788 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:07.970254 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=7 numFilterChains=4 numFilterRules=5 numNATChains=15 numNATRules=30
I0615 03:27:07.973819 11 proxier.go:820] "SyncProxyRules complete" elapsed="32.148803ms"
I0615 03:27:12.223493 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:12.250072 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34
I0615 03:27:12.254496 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.043846ms"
I0615 03:27:18.427605 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:18.454674 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=38
I0615 03:27:18.458790 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.224924ms"
I0615 03:27:23.999573 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:24.024590 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=36
I0615 03:27:24.027883 11 proxier.go:820] "SyncProxyRules complete" elapsed="28.383031ms"
I0615 03:27:24.036191 11 service.go:322] "Service updated ports" service="kubectl-3933/agnhost-primary" portCount=0
I0615 03:27:24.036229 11 service.go:462] "Removing service port" portName="kubectl-3933/agnhost-primary"
I0615 03:27:24.036260 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:24.061301 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=8 numFilterChains=4 numFilterRules=3 numNATChains=17 numNATRules=34
I0615 03:27:24.064591 11 proxier.go:820] "SyncProxyRules complete" elapsed="28.360666ms"
I0615 03:27:32.969603 11 service.go:322] "Service updated ports" service="services-8063/nodeport-update-service" portCount=1
I0615 03:27:32.969648 11 service.go:437] "Adding new service port" portName="services-8063/nodeport-update-service" servicePort="172.20.25.229:80/TCP"
I0615 03:27:32.969677 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:33.018338 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34
I0615 03:27:33.024258 11 proxier.go:820] "SyncProxyRules complete" elapsed="54.609231ms"
I0615 03:27:33.024325 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:33.077619 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=4 numNATChains=17 numNATRules=34
I0615 03:27:33.082991 11 proxier.go:820] "SyncProxyRules complete" elapsed="58.690912ms"
I0615 03:27:33.267126 11 service.go:322] "Service updated ports" service="services-8063/nodeport-update-service" portCount=1
I0615 03:27:34.083132 11 service.go:437] "Adding new service port" portName="services-8063/nodeport-update-service:tcp-port" servicePort="172.20.25.229:80/TCP"
I0615 03:27:34.083160 11 service.go:462] "Removing service port" portName="services-8063/nodeport-update-service"
I0615 03:27:34.083196 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:34.108122 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=8 numFilterChains=4 numFilterRules=5 numNATChains=17 numNATRules=34
I0615 03:27:34.118306 11 proxier.go:820] "SyncProxyRules complete" elapsed="35.205276ms"
I0615 03:27:35.119198 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:35.146233 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=20 numNATRules=41
I0615 03:27:35.150461 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.322475ms"
I0615 03:27:37.114755 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:37.143476 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44
I0615 03:27:37.147729 11 proxier.go:820] "SyncProxyRules complete" elapsed="33.03988ms"
I0615 03:27:38.967332 11 service.go:322] "Service updated ports" service="pods-1526/fooservice" portCount=1
I0615 03:27:38.967386 11 service.go:437] "Adding new service port" portName="pods-1526/fooservice" servicePort="172.20.20.159:8765/TCP"
I0615 03:27:38.967424 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:39.028409 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=44
I0615 03:27:39.038657 11 proxier.go:820] "SyncProxyRules complete" elapsed="71.275291ms"
I0615 03:27:39.039829 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:39.098453 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=48
I0615 03:27:39.102497 11 proxier.go:820] "SyncProxyRules complete" elapsed="63.675632ms"
I0615 03:27:47.070188 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:47.099997 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=46
I0615 03:27:47.104262 11 proxier.go:820] "SyncProxyRules complete" elapsed="34.134393ms"
I0615 03:27:47.104324 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:47.132125 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=44
I0615 03:27:47.136178 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.883544ms"
I0615 03:27:47.208109 11 service.go:322] "Service updated ports" service="dns-4197/test-service-2" portCount=0
I0615 03:27:48.137436 11 service.go:462] "Removing service port" portName="dns-4197/test-service-2:http"
I0615 03:27:48.137593 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:48.167662 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44
I0615 03:27:48.172254 11 proxier.go:820] "SyncProxyRules complete" elapsed="34.83505ms"
I0615 03:27:49.560559 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:49.605952 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=21 numNATRules=42
I0615 03:27:49.614825 11 proxier.go:820] "SyncProxyRules complete" elapsed="54.306987ms"
I0615 03:27:49.662444 11 service.go:322] "Service updated ports" service="pods-1526/fooservice" portCount=0
I0615 03:27:50.615644 11 service.go:462] "Removing service port" portName="pods-1526/fooservice"
I0615 03:27:50.615710 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:50.639700 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=40
I0615 03:27:50.643298 11 proxier.go:820] "SyncProxyRules complete" elapsed="27.668844ms"
I0615 03:27:52.040247 11 service.go:322] "Service updated ports" service="webhook-9884/e2e-test-webhook" portCount=1
I0615 03:27:52.040301 11 service.go:437] "Adding new service port" portName="webhook-9884/e2e-test-webhook" servicePort="172.20.11.212:8443/TCP"
I0615 03:27:52.040335 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:52.080398 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=4 numNATChains=19 numNATRules=40
I0615 03:27:52.085118 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.812881ms"
I0615 03:27:52.085189 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:52.113088 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=44
I0615 03:27:52.116672 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.523319ms"
I0615 03:27:54.362098 11 service.go:322] "Service updated ports" service="webhook-9884/e2e-test-webhook" portCount=0
I0615 03:27:54.362130 11 service.go:462] "Removing service port" portName="webhook-9884/e2e-test-webhook"
I0615 03:27:54.362154 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:54.412307 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=21 numNATRules=42
I0615 03:27:54.416504 11 proxier.go:820] "SyncProxyRules complete" elapsed="54.36798ms"
I0615 03:27:54.416568 11 proxier.go:853] "Syncing iptables rules"
I0615 03:27:54.459643 11 proxier.go:1464] "Reloading service iptables data" numServices=5 numEndpoints=9 numFilterChains=4 numFilterRules=3 numNATChains=19 numNATRules=40
I0615 03:27:54.468437 11 proxier.go:820] "SyncProxyRules complete" elapsed="51.897521ms"
I0615 03:28:01.648760 11 service.go:322] "Service updated ports" service="services-1772/affinity-nodeport" portCount=1
I0615 03:28:01.648892 11 service.go:437] "Adding new service port" portName="services-1772/affinity-nodeport" servicePort="172.20.25.152:80/TCP"
I0615 03:28:01.648929 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:01.673819 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=40
I0615 03:28:01.677079 11 proxier.go:820] "SyncProxyRules complete" elapsed="28.272161ms"
I0615 03:28:01.677128 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:01.704330 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=19 numNATRules=40
I0615 03:28:01.707764 11 proxier.go:820] "SyncProxyRules complete" elapsed="30.651999ms"
I0615 03:28:04.007991 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:04.034142 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=10 numFilterChains=4 numFilterRules=3 numNATChains=22 numNATRules=48
I0615 03:28:04.044424 11 proxier.go:820] "SyncProxyRules complete" elapsed="36.462986ms"
I0615 03:28:05.004368 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:05.030182 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=52
I0615 03:28:05.035444 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.129708ms"
I0615 03:28:14.022459 11 service.go:322] "Service updated ports" service="services-8063/nodeport-update-service" portCount=2
I0615 03:28:14.022602 11 service.go:439] "Updating existing service port" portName="services-8063/nodeport-update-service:tcp-port" servicePort="172.20.25.229:80/TCP"
I0615 03:28:14.022687 11 service.go:437] "Adding new service port" portName="services-8063/nodeport-update-service:udp-port" servicePort="172.20.25.229:80/UDP"
I0615 03:28:14.022729 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:14.047454 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=52
I0615 03:28:14.051621 11 proxier.go:820] "SyncProxyRules complete" elapsed="29.027611ms"
I0615 03:28:14.051787 11 proxier.go:837] "Stale service" protocol="udp" servicePortName="services-8063/nodeport-update-service:udp-port" clusterIP="172.20.25.229"
I0615 03:28:14.051843 11 proxier.go:847] "Stale service" protocol="udp" servicePortName="services-8063/nodeport-update-service:udp-port" nodePort=31598
I0615 03:28:14.051851 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:14.076027 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=3 numNATChains=27 numNATRules=62
I0615 03:28:14.091982 11 proxier.go:820] "SyncProxyRules complete" elapsed="40.322698ms"
I0615 03:28:16.565353 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:16.622962 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=66
I0615 03:28:16.627078 11 proxier.go:820] "SyncProxyRules complete" elapsed="61.786278ms"
I0615 03:28:28.946331 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:28.978063 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=3 numNATChains=28 numNATRules=63
I0615 03:28:28.983003 11 proxier.go:820] "SyncProxyRules complete" elapsed="36.720558ms"
I0615 03:28:29.949551 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:29.987788 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=27 numNATRules=54
I0615 03:28:29.992203 11 proxier.go:820] "SyncProxyRules complete" elapsed="42.74346ms"
I0615 03:28:30.568209 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:30.604308 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=14 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:30.611533 11 proxier.go:820] "SyncProxyRules complete" elapsed="43.393681ms"
I0615 03:28:31.026331 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:31.062351 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:31.068286 11 proxier.go:820] "SyncProxyRules complete" elapsed="42.006304ms"
I0615 03:28:32.068441 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:32.092130 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:32.096018 11 proxier.go:820] "SyncProxyRules complete" elapsed="27.666755ms"
I0615 03:28:32.784793 11 service.go:322] "Service updated ports" service="services-1772/affinity-nodeport" portCount=0
I0615 03:28:33.097124 11 service.go:462] "Removing service port" portName="services-1772/affinity-nodeport"
I0615 03:28:33.097206 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:33.133568 11 proxier.go:1464] "Reloading service iptables data" numServices=6 numEndpoints=11 numFilterChains=4 numFilterRules=3 numNATChains=23 numNATRules=50
I0615 03:28:33.139052 11 proxier.go:820] "SyncProxyRules complete" elapsed="41.958964ms"
I0615 03:28:38.614874 11 service.go:322] "Service updated ports" service="conntrack-5332/svc-udp" portCount=1
I0615 03:28:38.614921 11 service.go:437] "Adding new service port" portName="conntrack-5332/svc-udp:udp" servicePort="172.20.26.215:80/UDP"
I0615 03:28:38.614950 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:38.640999 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:38.646091 11 proxier.go:820] "SyncProxyRules complete" elapsed="31.171129ms"
I0615 03:28:38.646161 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:38.668795 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=11 numFilterChains=4 numFilterRules=5 numNATChains=23 numNATRules=50
I0615 03:28:38.672992 11 proxier.go:820] "SyncProxyRules complete" elapsed="26.8604ms"
I0615 03:28:45.764035 11 proxier.go:837] "Stale service" protocol="udp" servicePortName="conntrack-5332/svc-udp:udp" clusterIP="172.20.26.215"
I0615 03:28:45.764118 11 proxier.go:847] "Stale service" protocol="udp" servicePortName="conntrack-5332/svc-udp:udp" nodePort=31411
I0615 03:28:45.764129 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:45.809705 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=12 numFilterChains=4 numFilterRules=3 numNATChains=26 numNATRules=57
I0615 03:28:45.833556 11 proxier.go:820] "SyncProxyRules complete" elapsed="69.629529ms"
I0615 03:28:48.670031 11 service.go:322] "Service updated ports" service="endpointslice-4880/example-int-port" portCount=1
I0615 03:28:48.670082 11 service.go:437] "Adding new service port" portName="endpointslice-4880/example-int-port:example" servicePort="172.20.13.140:80/TCP"
I0615 03:28:48.670117 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:48.696581 11 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=57
I0615 03:28:48.702474 11 proxier.go:820] "SyncProxyRules complete" elapsed="32.398727ms"
I0615 03:28:48.702546 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:48.730571 11 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=12 numFilterChains=4 numFilterRules=4 numNATChains=26 numNATRules=57
I0615 03:28:48.735075 11 proxier.go:820] "SyncProxyRules complete" elapsed="32.563935ms"
I0615 03:28:48.822208 11 service.go:322] "Service updated ports" service="endpointslice-4880/example-named-port" portCount=1
I0615 03:28:48.972429 11 service.go:322] "Service updated ports" service="endpointslice-4880/example-no-match" portCount=1
I0615 03:28:49.735284 11 service.go:437] "Adding new service port" portName="endpointslice-4880/example-named-port:http" servicePort="172.20.17.236:80/TCP"
I0615 03:28:49.735331 11 service.go:437] "Adding new service port" portName="endpointslice-4880/example-no-match:example-no-match" servicePort="172.20.4.63:80/TCP"
I0615 03:28:49.735381 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:49.793851 11 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=12 numFilterChains=4 numFilterRules=6 numNATChains=26 numNATRules=57
I0615 03:28:49.800012 11 proxier.go:820] "SyncProxyRules complete" elapsed="64.833433ms"
I0615 03:28:52.001358 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:52.026601 11 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=13 numFilterChains=4 numFilterRules=5 numNATChains=28 numNATRules=61
I0615 03:28:52.030374 11 proxier.go:820] "SyncProxyRules complete" elapsed="29.068006ms"
I0615 03:28:52.403440 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:52.428704 11 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=14 numFilterChains=4 numFilterRules=4 numNATChains=30 numNATRules=65
I0615 03:28:52.433269 11 proxier.go:820] "SyncProxyRules complete" elapsed="29.895183ms"
I0615 03:28:53.433542 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:53.459348 11 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=68
I0615 03:28:53.463815 11 proxier.go:820] "SyncProxyRules complete" elapsed="30.386307ms"
I0615 03:28:58.142399 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:58.168204 11 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=32 numNATRules=71
I0615 03:28:58.174758 11 proxier.go:820] "SyncProxyRules complete" elapsed="32.435765ms"
I0615 03:28:59.538259 11 proxier.go:853] "Syncing iptables rules"
I0615 03:28:59.594806 11 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=16 numFilterChains=4 numFilterRules=4 numNATChains=32 numNATRules=69
I0615 03:28:59.628814 11 proxier.go:820] "SyncProxyRules complete" elapsed="90.620279ms"
I0615 03:29:00.542021 11 proxier.go:853] "Syncing iptables rules"
I0615 03:29:00.571691 11 proxier.go:1464] "Reloading service iptables data" numServices=10 numEndpoints=15 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=68
I0615 03:29:00.576185 11 proxier.go:820] "SyncProxyRules complete" elapsed="34.246334ms"
I0615 03:29:01.768099 11 service.go:322] "Service updated ports" service="services-8063/nodeport-update-service" portCount=0
I0615 03:29:01.768149 11 service.go:462] "Removing service port" portName="services-8063/nodeport-update-service:tcp-port"
I0615 03:29:01.768160 11 service.go:462] "Removing service port" portName="services-8063/nodeport-update-service:udp-port"
I0615 03:29:01.768205 11 proxier.go:853] "Syncing iptables rules"
I0615 03:29:01.799554 11 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=31 numNATRules=56
I0615 03:29:01.812720 11 proxier.go:820] "SyncProxyRules complete" elapsed="44.569941ms"
I0615 03:29:01.812838 11 proxier.go:853] "Syncing iptables rules"
I0615 03:29:01.844397 11 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=11 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=48
I0615 03:29:01.848158 11 proxier.go:820] "SyncProxyRules complete" elapsed="35.3972ms"
I0615 03:29:10.436976 11 proxier.go:853] "Syncing iptables rules"
I0615 03:29:10.461143 11 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=46
I0615 03:29:10.466239 11 proxier.go:820] "SyncProxyRules complete" elapsed="29.326188ms"
I0615 03:29:10.583284 11 proxier.go:853] "Syncing iptables rules"
I0615 03:29:10.607347 11 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=9 numFilterChains=4 numFilterRules=5 numNATChains=22 numNATRules=43
I0615 03:29:10.611389 11 proxier.go:820] "SyncProxyRules complete" elapsed="28.165512ms"
I0615 03:29:11.438192 11 proxier.go:853] "Syncing iptables rules"
I0615 03:29:11.499523 11 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=22 numNATRules=45
I0615 03:29:11.512347 11 proxier.go:820] "SyncProxyRules complete" elapsed="74.194969ms"
I0615 03:29:12.351940 11 service.go:322] "Service updated ports" service="conntrack-5332/svc-udp" portCount=0
I0615 03:29:12.512928 11 service.go:462] "Removing service port" portName="conntrack-5332/svc-udp:udp"
I0615 03:29:12.513038 11 proxier.go:853] "Syncing iptables rules"
I0615 03:29:12.545890 11 proxier.go:1464] "Reloading service iptables data" numServices=7 numEndpoints=10 numFilterChains=4 numFilterRules=4 numNATChains=23 numNATRules=44
I0615 03:29:12.555198 11 proxier.go:820] "SyncProxyRules complete" elapsed="42.311753ms"
I0615 03:29:19.618134 11 service.go:322] "Service updated ports" service="conntrack-6419/svc-udp" portCount=1
I0615 03:29:19.618188 11 service.go:437] "Adding new service port" portName="conntrack-6419/svc-udp:udp" servicePort="172.20.22.210:80/UDP"
I0615 03:29:19.618249 11 proxier.go:853] "Syncing iptables rules"
I0615 03:29:19.732829 11 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41
I0615 03:29:19.751273 11 proxier.go:820] "SyncProxyRules complete" elapsed="133.080647ms"
I0615 03:29:19.751349 11 proxier.go:853] "Syncing iptables rules"
I0615 03:29:19.923997 11 proxier.go:1464] "Reloading service iptables data" numServices=8 numEndpoints=10 numFilterChains=4 numFilterRules=5 numNATChains=20 numNATRules=41
I0615 03:29:19.956516 11 proxier.go:820] "SyncProxyRules complete" elapsed="205.197187ms"
I0615 03:29:22.793881 11 service.go:322] "Service updated ports" service="webhook-3549/e2e-test-webhook" portCount=1
I0615 03:29:22.793929 11 service.go:437] "Adding new service port" portName="webhook-3549/e2e-test-webhook" servicePort="172.20.4.113:8443/TCP"