Recent runs | View in Spyglass
Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 1h51m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc000623938>: {
        error: <*errors.withMessage | 0xc000b0a540>{
            cause: <*errors.errorString | 0xc0016d14a0>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1ad386a, 0x1b13d28, 0x73c2fa, 0x73bcc5, 0x73b3bb, 0x741149, 0x740b27, 0x761fe5, 0x761d05, 0x761545, 0x7637f2, 0x76f9a5, 0x76f7be, 0x1b2e6d1, 0x5156c2, 0x46b2c1],
    }
Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-ug6xtg
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-ug6xtg"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-i77nai" using the "upgrades" template (Kubernetes v1.22.9, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-i77nai --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
configmap/cni-k8s-upgrade-and-conformance-i77nai-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai-mp-0-config created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai-md-0 created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai created
machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai-md-0 created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai-mp-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai-control-plane created
dockercluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai-dmp-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-i77nai-md-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-ug6xtg/k8s-upgrade-and-conformance-i77nai-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-ug6xtg/k8s-upgrade-and-conformance-i77nai-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Kubernetes control-plane
INFO: Patching the new kubernetes version to KCP
INFO: Waiting for control-plane machines to have the upgraded kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.6
INFO: Waiting for kube-proxy to have the upgraded kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
STEP: Upgrading the machine deployment
INFO: Patching the new kubernetes version to Machine Deployment k8s-upgrade-and-conformance-ug6xtg/k8s-upgrade-and-conformance-i77nai-md-0
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-ug6xtg/k8s-upgrade-and-conformance-i77nai-md-0 to be upgraded from v1.22.9 to v1.23.6
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.6
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-ug6xtg/k8s-upgrade-and-conformance-i77nai-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-ug6xtg/k8s-upgrade-and-conformance-i77nai-mp-0 to be upgraded from v1.22.9 to v1.23.6
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.6
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "-ginkgo.trace=true" "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651326480 - Will randomize all specs
Will run 7044 specs
Running in parallel across 4 nodes
Apr 30 13:48:04.049: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:48:04.053: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 30 13:48:04.075: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 30 13:48:04.102: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 30 13:48:04.102: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 30 13:48:04.102: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 30 13:48:04.106: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 30 13:48:04.106: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 30 13:48:04.106: INFO: e2e test version: v1.23.6
Apr 30 13:48:04.108: INFO: kube-apiserver version: v1.23.6
Apr 30 13:48:04.108: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:48:04.112: INFO: Cluster IP family: ipv4
Apr 30 13:48:04.142: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:48:04.159: INFO: Cluster IP family: ipv4
Apr 30 13:48:04.151: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:48:04.166: INFO: Cluster IP family: ipv4
------------------------------
Apr 30 13:48:04.258: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:48:04.271: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:04.328: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
W0430 13:48:04.345352 18 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 30 13:48:04.345: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Apr 30 13:48:04.360: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4122 81f4e989-e6cc-46b4-905b-971d9ca1a764 2091 0 2022-04-30 13:48:04 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-30 13:48:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 30 13:48:04.360: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4122 81f4e989-e6cc-46b4-905b-971d9ca1a764 2092 0 2022-04-30 13:48:04 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-30 13:48:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Apr 30 13:48:04.369: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4122 81f4e989-e6cc-46b4-905b-971d9ca1a764 2093 0 2022-04-30 13:48:04 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-30 13:48:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 30 13:48:04.369: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4122 81f4e989-e6cc-46b4-905b-971d9ca1a764 2094 0 2022-04-30 13:48:04 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-30 13:48:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:04.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4122" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":1,"skipped":31,"failed":0}
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:04.401: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pod templates
Apr 30 13:48:04.423: INFO: created test-podtemplate-1
Apr 30 13:48:04.425: INFO: created test-podtemplate-2
Apr 30 13:48:04.428: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Apr 30 13:48:04.433: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Apr 30 13:48:04.453: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:04.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3651" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":2,"skipped":51,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:04.119: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
W0430 13:48:04.153828 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 30 13:48:04.153: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 30 13:48:04.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b" in namespace "downward-api-6180" to be "Succeeded or Failed"
Apr 30 13:48:04.175: INFO: Pod "downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855551ms
Apr 30 13:48:06.180: INFO: Pod "downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008587087s
Apr 30 13:48:08.280: INFO: Pod "downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108852812s
Apr 30 13:48:10.284: INFO: Pod "downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112468224s
Apr 30 13:48:12.288: INFO: Pod "downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11652541s
STEP: Saw pod success
Apr 30 13:48:12.288: INFO: Pod "downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b" satisfied condition "Succeeded or Failed"
Apr 30 13:48:12.290: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b container client-container: <nil>
STEP: delete the pod
Apr 30 13:48:12.442: INFO: Waiting for pod downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b to disappear
Apr 30 13:48:12.516: INFO: Pod downwardapi-volume-b9f79c0f-0b92-45d4-9077-f7913bac752b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:12.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6180" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:04.483: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:48:04.513: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 30 13:48:09.516: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 30 13:48:09.516: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Apr 30 13:48:15.545: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1271 aabac973-0c66-4a66-b9f1-bb5e8c8e2d0b 2266 1 2022-04-30 13:48:09 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-04-30 13:48:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:48:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00429ef58 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-30 13:48:09 +0000 UTC,LastTransitionTime:2022-04-30 13:48:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-56cd759769" has successfully progressed.,LastUpdateTime:2022-04-30 13:48:13 +0000 UTC,LastTransitionTime:2022-04-30 13:48:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 30 13:48:15.547: INFO: New ReplicaSet "test-cleanup-deployment-56cd759769" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-56cd759769 deployment-1271 fdff432b-ea6a-4109-b279-948f9f276210 2255 1 2022-04-30 13:48:09 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:56cd759769] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment aabac973-0c66-4a66-b9f1-bb5e8c8e2d0b 0xc00406b167 0xc00406b168}] [] [{kube-controller-manager Update apps/v1 2022-04-30 13:48:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aabac973-0c66-4a66-b9f1-bb5e8c8e2d0b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:48:13 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 56cd759769,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:56cd759769] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00406b218 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:48:15.550: INFO: Pod "test-cleanup-deployment-56cd759769-whmms" is available: &Pod{ObjectMeta:{test-cleanup-deployment-56cd759769-whmms test-cleanup-deployment-56cd759769- deployment-1271 715dd7e7-aa32-4bfd-8875-d82e0b35f6fb 2254 0 2022-04-30 13:48:09 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:56cd759769] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-56cd759769 fdff432b-ea6a-4109-b279-948f9f276210 0xc00406b587 0xc00406b588}] [] [{kube-controller-manager Update v1 2022-04-30 13:48:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdff432b-ea6a-4109-b279-948f9f276210\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:48:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.3\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7k7zp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7k7zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-o9uwcm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:48:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:48:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:48:13 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:48:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.3,StartTime:2022-04-30 13:48:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:48:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://0e7d44916574a7703ad3b91ad5524d848c4dc8426aab8ac35b9974082c7c6b97,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:15.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1271" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":3,"skipped":57,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:12.560: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-75/configmap-test-f9e869dc-4187-4e87-b861-82c81ce09a48
STEP: Creating a pod to test consume configMaps
Apr 30 13:48:12.597: INFO: Waiting up to 5m0s for pod "pod-configmaps-79154ed2-25c6-4408-9772-d3e613fa6ef1" in namespace "configmap-75" to be "Succeeded or Failed"
Apr 30 13:48:12.616: INFO: Pod "pod-configmaps-79154ed2-25c6-4408-9772-d3e613fa6ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.078432ms
Apr 30 13:48:14.620: INFO: Pod "pod-configmaps-79154ed2-25c6-4408-9772-d3e613fa6ef1": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.023129487s
Apr 30 13:48:16.624: INFO: Pod "pod-configmaps-79154ed2-25c6-4408-9772-d3e613fa6ef1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026350372s
STEP: Saw pod success
Apr 30 13:48:16.624: INFO: Pod "pod-configmaps-79154ed2-25c6-4408-9772-d3e613fa6ef1" satisfied condition "Succeeded or Failed"
Apr 30 13:48:16.627: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-configmaps-79154ed2-25c6-4408-9772-d3e613fa6ef1 container env-test: <nil>
STEP: delete the pod
Apr 30 13:48:16.638: INFO: Waiting for pod pod-configmaps-79154ed2-25c6-4408-9772-d3e613fa6ef1 to disappear
Apr 30 13:48:16.640: INFO: Pod pod-configmaps-79154ed2-25c6-4408-9772-d3e613fa6ef1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:16.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-75" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:04.226: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
W0430 13:48:04.245695 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 30 13:48:04.245: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:18.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4563" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":1,"skipped":31,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:16.668: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating pod
Apr 30 13:48:16.700: INFO: The status of Pod pod-hostip-8704a408-7306-4156-b1dd-bed35e3810c0 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:48:18.704: INFO: The status of Pod pod-hostip-8704a408-7306-4156-b1dd-bed35e3810c0 is Running (Ready = true)
Apr 30 13:48:18.709: INFO: Pod pod-hostip-8704a408-7306-4156-b1dd-bed35e3810c0 has hostIP: 172.18.0.6
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:18.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4307" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:15.610: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 30 13:48:15.642: INFO: Waiting up to 5m0s for pod "pod-bfa66c0b-e41c-4ce8-aa59-07da132935ff" in namespace "emptydir-5029" to be "Succeeded or Failed"
Apr 30 13:48:15.644: INFO: Pod
"pod-bfa66c0b-e41c-4ce8-aa59-07da132935ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153385ms Apr 30 13:48:17.648: INFO: Pod "pod-bfa66c0b-e41c-4ce8-aa59-07da132935ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005600044s Apr 30 13:48:19.652: INFO: Pod "pod-bfa66c0b-e41c-4ce8-aa59-07da132935ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010027415s �[1mSTEP�[0m: Saw pod success Apr 30 13:48:19.652: INFO: Pod "pod-bfa66c0b-e41c-4ce8-aa59-07da132935ff" satisfied condition "Succeeded or Failed" Apr 30 13:48:19.655: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-bfa66c0b-e41c-4ce8-aa59-07da132935ff container test-container: <nil> �[1mSTEP�[0m: delete the pod Apr 30 13:48:19.687: INFO: Waiting for pod pod-bfa66c0b-e41c-4ce8-aa59-07da132935ff to disappear Apr 30 13:48:19.692: INFO: Pod pod-bfa66c0b-e41c-4ce8-aa59-07da132935ff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:48:19.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-5029" for this suite. 
•
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:18.302: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:48:18.336: INFO: The status of Pod busybox-scheduling-4cbb0d5e-cdf7-4b4e-9964-71cc16381755 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:48:20.340: INFO: The status of Pod busybox-scheduling-4cbb0d5e-cdf7-4b4e-9964-71cc16381755 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:20.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-920" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":51,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:20.391: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:20.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-753" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":3,"skipped":62,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:20.525: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should provide secure master service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:20.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6615" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:18.739: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 13:48:19.458: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 13:48:22.481: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:22.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4187" for this suite.
STEP: Destroying namespace "webhook-4187-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":99,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:19.701: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 30 13:48:19.732: INFO: Waiting up to 5m0s for pod "pod-11472d6e-a13e-4ca2-9075-cfbcace12322" in namespace "emptydir-5892" to be "Succeeded or Failed"
Apr 30 13:48:19.734: INFO: Pod "pod-11472d6e-a13e-4ca2-9075-cfbcace12322": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108217ms
Apr 30 13:48:21.740: INFO: Pod "pod-11472d6e-a13e-4ca2-9075-cfbcace12322": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007978362s
Apr 30 13:48:23.754: INFO: Pod "pod-11472d6e-a13e-4ca2-9075-cfbcace12322": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021554897s
STEP: Saw pod success
Apr 30 13:48:23.754: INFO: Pod "pod-11472d6e-a13e-4ca2-9075-cfbcace12322" satisfied condition "Succeeded or Failed"
Apr 30 13:48:23.760: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-ctsmx pod pod-11472d6e-a13e-4ca2-9075-cfbcace12322 container test-container: <nil>
STEP: delete the pod
Apr 30 13:48:23.789: INFO: Waiting for pod pod-11472d6e-a13e-4ca2-9075-cfbcace12322 to disappear
Apr 30 13:48:23.791: INFO: Pod pod-11472d6e-a13e-4ca2-9075-cfbcace12322 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:23.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5892" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":99,"failed":0}
SSS
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":4,"skipped":114,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:20.563: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pdb that targets all three pods in a test replica set
STEP: Waiting for the pdb to be processed
STEP: First trying to evict a pod which shouldn't be evictable
STEP: Waiting for all pods to be running
Apr 30 13:48:22.627: INFO: pods: 0 < 3
Apr 30 13:48:24.631: INFO: running pods: 2 < 3
STEP: locating a running pod
STEP: Updating the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
STEP: Waiting for the pdb to observed all healthy pods
STEP: Patching the pdb to
disallow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
Apr 30 13:48:28.874: INFO: running pods: 2 < 3
STEP: locating a running pod
STEP: Deleting the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be deleted
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:30.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-8367" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":5,"skipped":114,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:30.951: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:48:30.979: INFO: Creating replica set
"test-rolling-update-controller" (going to be adopted) Apr 30 13:48:30.987: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 30 13:48:35.993: INFO: Pod name sample-pod: Found 1 pods out of 1 �[1mSTEP�[0m: ensuring each pod is running Apr 30 13:48:35.993: INFO: Creating deployment "test-rolling-update-deployment" Apr 30 13:48:35.999: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 30 13:48:36.005: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 30 13:48:38.012: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 30 13:48:38.014: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 30 13:48:38.022: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6885 3b095001-9470-44f6-b159-79cd9a7fd5dd 2913 1 2022-04-30 13:48:35 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-04-30 13:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } 
{kube-controller-manager Update apps/v1 2022-04-30 13:48:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003bcf4a8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> 
nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-30 13:48:36 +0000 UTC,LastTransitionTime:2022-04-30 13:48:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-796dbc4547" has successfully progressed.,LastUpdateTime:2022-04-30 13:48:37 +0000 UTC,LastTransitionTime:2022-04-30 13:48:36 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 30 13:48:38.025: INFO: New ReplicaSet "test-rolling-update-deployment-796dbc4547" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-796dbc4547 deployment-6885 e106f910-91ad-44e7-b3b9-67663dea06a0 2897 1 2022-04-30 13:48:36 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 3b095001-9470-44f6-b159-79cd9a7fd5dd 0xc003bcf977 0xc003bcf978}] [] [{kube-controller-manager Update apps/v1 2022-04-30 13:48:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b095001-9470-44f6-b159-79cd9a7fd5dd\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:48:37 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 796dbc4547,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003bcfa28 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:48:38.025: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 30 13:48:38.025: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6885 b999018c-b79c-4164-ab0b-6a0703e4550c 2912 2 2022-04-30 13:48:30 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 3b095001-9470-44f6-b159-79cd9a7fd5dd 0xc003bcf84f 0xc003bcf860}] [] [{e2e.test Update apps/v1 2022-04-30 13:48:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:48:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3b095001-9470-44f6-b159-79cd9a7fd5dd\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:48:37 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003bcf918 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:48:38.030: INFO: Pod "test-rolling-update-deployment-796dbc4547-5p2fd" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-796dbc4547-5p2fd test-rolling-update-deployment-796dbc4547- deployment-6885 4c412585-99bd-4e8c-8f5f-f1b4a21a17e7 2896 0 2022-04-30 13:48:36 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-796dbc4547 e106f910-91ad-44e7-b3b9-67663dea06a0 0xc003a4fe47 0xc003a4fe48}] [] [{kube-controller-manager Update v1 2022-04-30 13:48:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e106f910-91ad-44e7-b3b9-67663dea06a0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet 
Update v1 2022-04-30 13:48:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xbkxt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},
Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xbkxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadC
onstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:48:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:48:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.9,StartTime:2022-04-30 13:48:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:48:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://d4b27daa54884a07561f4191ff8d41862812a4ac96f3269e56bde2f9c5d426ee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:38.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6885" for this suite.
• ------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":124,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:23.817: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-8367
[It] should list, patch and delete a collection of StatefulSets [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:48:23.859: INFO: Found 0 stateful pods, waiting for 1
Apr 30 13:48:33.864: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: patching the StatefulSet
Apr 30 13:48:33.884: INFO: Found 1 stateful pods, waiting for 2
Apr 30 13:48:43.890: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 30 13:48:43.890: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true
STEP: Listing all StatefulSets
STEP: Delete all of the StatefulSets
STEP: Verify that StatefulSets have been deleted
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Apr 30 13:48:43.906: INFO: Deleting all statefulset in ns statefulset-8367
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:43.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8367" for this suite.
• ------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":6,"skipped":102,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:43.984: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 30 13:48:44.039: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 30 13:48:44.043: INFO: starting watch
STEP: patching
STEP: updating
Apr 30 13:48:44.053: INFO: waiting for watch events with expected annotations
Apr 30 13:48:44.053: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:44.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-7423" for this suite.
• ------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":7,"skipped":138,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:38.059: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Apr 30 13:48:38.101: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:48:40.106: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Apr 30 13:48:40.117: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:48:42.121: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 30 13:48:42.132: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 30 13:48:42.139: INFO: Pod pod-with-poststart-http-hook still exists
Apr 30 13:48:44.140: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 30 13:48:44.144: INFO: Pod pod-with-poststart-http-hook still exists
Apr 30 13:48:46.139: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 30 13:48:46.143: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:46.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-216" for this suite.
• ------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":132,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:44.131: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-map-3910eced-3372-4b9f-8bdc-e290974cb6ca
STEP: Creating a pod to test consume secrets
Apr 30 13:48:44.158: INFO: Waiting up to 5m0s for pod "pod-secrets-0af967cd-1b27-424e-8ca8-7fd98b167f59" in namespace "secrets-6014" to be "Succeeded or Failed"
Apr 30 13:48:44.160: INFO: Pod "pod-secrets-0af967cd-1b27-424e-8ca8-7fd98b167f59": Phase="Pending", Reason="", readiness=false. Elapsed: 1.857631ms
Apr 30 13:48:46.167: INFO: Pod "pod-secrets-0af967cd-1b27-424e-8ca8-7fd98b167f59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008891783s
Apr 30 13:48:48.171: INFO: Pod "pod-secrets-0af967cd-1b27-424e-8ca8-7fd98b167f59": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.012700856s
STEP: Saw pod success
Apr 30 13:48:48.171: INFO: Pod "pod-secrets-0af967cd-1b27-424e-8ca8-7fd98b167f59" satisfied condition "Succeeded or Failed"
Apr 30 13:48:48.175: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-secrets-0af967cd-1b27-424e-8ca8-7fd98b167f59 container secret-volume-test: <nil>
STEP: delete the pod
Apr 30 13:48:48.190: INFO: Waiting for pod pod-secrets-0af967cd-1b27-424e-8ca8-7fd98b167f59 to disappear
Apr 30 13:48:48.192: INFO: Pod pod-secrets-0af967cd-1b27-424e-8ca8-7fd98b167f59 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:48.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6014" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":161,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:22.663: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from
ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a service externalname-service with the type=ExternalName in namespace services-779
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-779
I0430 13:48:22.720374 21 runners.go:193] Created replication controller with name: externalname-service, namespace: services-779, replica count: 2
I0430 13:48:25.772521 21 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 30 13:48:25.772: INFO: Creating new exec pod
Apr 30 13:48:30.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 30 13:48:36.121: INFO: rc: 1
Apr 30 13:48:36.121: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1
error: exit status 1
Retrying...
Apr 30 13:48:37.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 30 13:48:42.332: INFO: rc: 1
Apr 30 13:48:42.332: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1
error: exit status 1
Retrying...
Apr 30 13:48:43.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 30 13:48:43.293: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Apr 30 13:48:43.293: INFO: stdout: "externalname-service-782gt"
Apr 30 13:48:43.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.136.19 80'
Apr 30 13:48:43.434: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.136.19 80\nConnection to 10.133.136.19 80 port [tcp/http] succeeded!\n"
Apr 30 13:48:43.434: INFO: stdout: ""
Apr 30 13:48:44.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.136.19 80'
Apr 30 13:48:44.586: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.136.19 80\nConnection to 10.133.136.19 80 port [tcp/http] succeeded!\n"
Apr 30 13:48:44.586: INFO: stdout: ""
Apr 30 13:48:45.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x
-c echo hostName | nc -v -t -w 2 10.133.136.19 80'
Apr 30 13:48:45.575: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.136.19 80\nConnection to 10.133.136.19 80 port [tcp/http] succeeded!\n"
Apr 30 13:48:45.575: INFO: stdout: ""
Apr 30 13:48:46.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.136.19 80'
Apr 30 13:48:46.564: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.136.19 80\nConnection to 10.133.136.19 80 port [tcp/http] succeeded!\n"
Apr 30 13:48:46.564: INFO: stdout: "externalname-service-g29qc"
Apr 30 13:48:46.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 32684'
Apr 30 13:48:46.737: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 32684\nConnection to 172.18.0.4 32684 port [tcp/*] succeeded!\n"
Apr 30 13:48:46.737: INFO: stdout: "externalname-service-782gt"
Apr 30 13:48:46.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 32684'
Apr 30 13:48:46.958: INFO: stderr: "+ + ncecho -v -t hostName -w 2\n 172.18.0.7 32684\nConnection to 172.18.0.7 32684 port [tcp/*] succeeded!\n"
Apr 30 13:48:46.958: INFO: stdout: ""
Apr 30 13:48:47.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 32684'
Apr 30 13:48:48.116: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 32684\nConnection to 172.18.0.7 32684 port [tcp/*] succeeded!\n"
Apr 30 13:48:48.116: INFO: stdout: ""
Apr 30 13:48:48.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-779 exec execpodmllnn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7
32684'
Apr 30 13:48:49.130: INFO: stderr: "+ + echonc hostName -v\n -t -w 2 172.18.0.7 32684\nConnection to 172.18.0.7 32684 port [tcp/*] succeeded!\n"
Apr 30 13:48:49.130: INFO: stdout: "externalname-service-g29qc"
Apr 30 13:48:49.130: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:49.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-779" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
• ------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:46.153: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 13:48:46.471: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 13:48:49.489: INFO: Waiting for amount of service:e2e-test-webhook
endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:49.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-745" for this suite.
STEP: Destroying namespace "webhook-745-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• ------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":8,"skipped":133,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:48.222: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:48:48.253: INFO: The status of Pod busybox-readonly-fs2db8ed88-0347-4c4b-8f30-65fd3b208931 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:48:50.257: INFO: The status of Pod busybox-readonly-fs2db8ed88-0347-4c4b-8f30-65fd3b208931 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:50.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7308" for this suite.
• ------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":175,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:50.292: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if Kubernetes control plane services is included in
cluster-info [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: validating cluster-info
Apr 30 13:48:50.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7450 cluster-info'
Apr 30 13:48:50.398: INFO: stderr: ""
Apr 30 13:48:50.398: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.18.0.3:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:50.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7450" for this suite.
• ------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":10,"skipped":185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:49.683: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating projection with configMap that has name projected-configmap-test-upd-7b78fb93-d4c7-42bf-8db6-9705579912b7 �[1mSTEP�[0m: Creating the pod Apr 30 13:48:49.729: INFO: The status of Pod pod-projected-configmaps-5707567c-2fef-4bd5-9d26-df649ccd89fd is Pending, waiting for it to be Running (with Ready = true) Apr 30 13:48:51.733: INFO: The status of Pod pod-projected-configmaps-5707567c-2fef-4bd5-9d26-df649ccd89fd is Running (Ready = true) �[1mSTEP�[0m: Updating configmap projected-configmap-test-upd-7b78fb93-d4c7-42bf-8db6-9705579912b7 �[1mSTEP�[0m: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:48:53.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-6979" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":143,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:50.442: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Apr 30 13:48:50.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 create -f -'
Apr 30 13:48:51.342: INFO: stderr: ""
Apr 30 13:48:51.342: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 30 13:48:51.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Apr 30 13:48:51.426: INFO: stderr: ""
Apr 30 13:48:51.426: INFO: stdout: "update-demo-nautilus-8vrb6 update-demo-nautilus-95glz "
Apr 30 13:48:51.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 get pods update-demo-nautilus-8vrb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Apr 30 13:48:51.493: INFO: stderr: ""
Apr 30 13:48:51.493: INFO: stdout: ""
Apr 30 13:48:51.493: INFO: update-demo-nautilus-8vrb6 is created but not running
Apr 30 13:48:56.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Apr 30 13:48:56.562: INFO: stderr: ""
Apr 30 13:48:56.562: INFO: stdout: "update-demo-nautilus-8vrb6 update-demo-nautilus-95glz "
Apr 30 13:48:56.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 get pods update-demo-nautilus-8vrb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Apr 30 13:48:56.635: INFO: stderr: ""
Apr 30 13:48:56.635: INFO: stdout: "true"
Apr 30 13:48:56.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 get pods update-demo-nautilus-8vrb6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Apr 30 13:48:56.704: INFO: stderr: ""
Apr 30 13:48:56.704: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Apr 30 13:48:56.704: INFO: validating pod update-demo-nautilus-8vrb6
Apr 30 13:48:56.709: INFO: got data: { "image": "nautilus.jpg" }
Apr 30 13:48:56.709: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 30 13:48:56.709: INFO: update-demo-nautilus-8vrb6 is verified up and running
Apr 30 13:48:56.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 get pods update-demo-nautilus-95glz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Apr 30 13:48:56.788: INFO: stderr: ""
Apr 30 13:48:56.788: INFO: stdout: "true"
Apr 30 13:48:56.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 get pods update-demo-nautilus-95glz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Apr 30 13:48:56.868: INFO: stderr: ""
Apr 30 13:48:56.868: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Apr 30 13:48:56.868: INFO: validating pod update-demo-nautilus-95glz
Apr 30 13:48:56.872: INFO: got data: { "image": "nautilus.jpg" }
Apr 30 13:48:56.872: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 30 13:48:56.872: INFO: update-demo-nautilus-95glz is verified up and running
STEP: using delete to clean up resources
Apr 30 13:48:56.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 delete --grace-period=0 --force -f -'
Apr 30 13:48:56.955: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 30 13:48:56.955: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 30 13:48:56.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 get rc,svc -l name=update-demo --no-headers'
Apr 30 13:48:57.054: INFO: stderr: "No resources found in kubectl-8768 namespace.\n"
Apr 30 13:48:57.054: INFO: stdout: ""
Apr 30 13:48:57.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 30 13:48:57.162: INFO: stderr: ""
Apr 30 13:48:57.162: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:48:57.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8768" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":11,"skipped":212,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:53.808: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:00.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5863" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":10,"skipped":178,"failed":0}
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:57.257: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
Apr 30 13:48:57.277: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:01.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1630" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":12,"skipped":265,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:01.804: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should find a service from listing all namespaces [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:01.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1449" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":13,"skipped":277,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:00.951: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 30 13:49:00.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-865f5fc7-4d6e-43bb-98fc-f79de693a9c7" in namespace "projected-4733" to be "Succeeded or Failed"
Apr 30 13:49:00.978: INFO: Pod "downwardapi-volume-865f5fc7-4d6e-43bb-98fc-f79de693a9c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142501ms
Apr 30 13:49:02.983: INFO: Pod "downwardapi-volume-865f5fc7-4d6e-43bb-98fc-f79de693a9c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0068809s
Apr 30 13:49:04.987: INFO: Pod "downwardapi-volume-865f5fc7-4d6e-43bb-98fc-f79de693a9c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010713869s
STEP: Saw pod success
Apr 30 13:49:04.987: INFO: Pod "downwardapi-volume-865f5fc7-4d6e-43bb-98fc-f79de693a9c7" satisfied condition "Succeeded or Failed"
Apr 30 13:49:04.989: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod downwardapi-volume-865f5fc7-4d6e-43bb-98fc-f79de693a9c7 container client-container: <nil>
STEP: delete the pod
Apr 30 13:49:05.002: INFO: Waiting for pod downwardapi-volume-865f5fc7-4d6e-43bb-98fc-f79de693a9c7 to disappear
Apr 30 13:49:05.004: INFO: Pod downwardapi-volume-865f5fc7-4d6e-43bb-98fc-f79de693a9c7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:05.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4733" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":243,"failed":0}
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:01.870: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:49:01.895: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-5673062d-c376-4755-9184-920cfe96e99f" in namespace "security-context-test-304" to be "Succeeded or Failed"
Apr 30 13:49:01.897: INFO: Pod "busybox-privileged-false-5673062d-c376-4755-9184-920cfe96e99f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267453ms
Apr 30 13:49:03.901: INFO: Pod "busybox-privileged-false-5673062d-c376-4755-9184-920cfe96e99f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005898604s
Apr 30 13:49:05.905: INFO: Pod "busybox-privileged-false-5673062d-c376-4755-9184-920cfe96e99f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00963241s
Apr 30 13:49:05.905: INFO: Pod "busybox-privileged-false-5673062d-c376-4755-9184-920cfe96e99f" satisfied condition "Succeeded or Failed"
Apr 30 13:49:05.910: INFO: Got logs for pod "busybox-privileged-false-5673062d-c376-4755-9184-920cfe96e99f": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:05.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-304" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":302,"failed":0}
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":5,"skipped":70,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:49.164: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:06.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2825" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":6,"skipped":70,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:05.025: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 30 13:49:05.047: INFO: Waiting up to 5m0s for pod "pod-811859d7-cb83-4f57-b695-29c5596c885e" in namespace "emptydir-1877" to be "Succeeded or Failed"
Apr 30 13:49:05.051: INFO: Pod "pod-811859d7-cb83-4f57-b695-29c5596c885e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.536813ms
Apr 30 13:49:07.057: INFO: Pod "pod-811859d7-cb83-4f57-b695-29c5596c885e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009773433s
Apr 30 13:49:09.061: INFO: Pod "pod-811859d7-cb83-4f57-b695-29c5596c885e": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.013483314s
STEP: Saw pod success
Apr 30 13:49:09.061: INFO: Pod "pod-811859d7-cb83-4f57-b695-29c5596c885e" satisfied condition "Succeeded or Failed"
Apr 30 13:49:09.063: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-811859d7-cb83-4f57-b695-29c5596c885e container test-container: <nil>
STEP: delete the pod
Apr 30 13:49:09.075: INFO: Waiting for pod pod-811859d7-cb83-4f57-b695-29c5596c885e to disappear
Apr 30 13:49:09.077: INFO: Pod pod-811859d7-cb83-4f57-b695-29c5596c885e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:09.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1877" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":252,"failed":0}
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:05.963: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:09.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7219" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":335,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:10.017: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 30 13:49:16.109: INFO: 80 pods remaining
Apr 30 13:49:16.109: INFO: 80 pods has nil DeletionTimestamp
Apr 30 13:49:16.109: INFO:
Apr 30 13:49:17.130: INFO: 71 pods remaining
Apr 30 13:49:17.130: INFO: 70 pods has nil DeletionTimestamp
Apr 30 13:49:17.130: INFO:
Apr 30 13:49:18.116: INFO: 60 pods remaining
Apr 30 13:49:18.116: INFO: 60 pods has nil DeletionTimestamp
Apr 30 13:49:18.116: INFO:
Apr 30 13:49:19.114: INFO: 40 pods remaining
Apr 30 13:49:19.115: INFO: 40 pods has nil DeletionTimestamp
Apr 30 13:49:19.115: INFO:
Apr 30 13:49:20.101: INFO: 31 pods remaining
Apr 30 13:49:20.101: INFO: 31 pods has nil DeletionTimestamp
Apr 30 13:49:20.101: INFO:
Apr 30 13:49:21.101: INFO: 20 pods remaining
Apr 30 13:49:21.101: INFO: 20 pods has nil DeletionTimestamp
Apr 30 13:49:21.101: INFO:
STEP: Gathering metrics
Apr 30 13:49:22.136: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-i77nai-control-plane-r7q6n is Running (Ready = true)
Apr 30 13:49:22.415: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:22.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2165" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":16,"skipped":345,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:09.109: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-projected-sx48
STEP: Creating a pod to test atomic-volume-subpath
Apr 30 13:49:09.138: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-sx48" in namespace "subpath-6752" to be "Succeeded or Failed"
Apr 30 13:49:09.141: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344624ms
Apr 30 13:49:11.150: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 2.011545315s
Apr 30 13:49:13.157: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 4.017858679s
Apr 30 13:49:15.166: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 6.027239865s
Apr 30 13:49:17.194: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 8.055429156s
Apr 30 13:49:19.208: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 10.0690583s
Apr 30 13:49:21.213: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 12.073606397s
Apr 30 13:49:23.217: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 14.078243418s
Apr 30 13:49:25.224: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 16.084603841s
Apr 30 13:49:27.229: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 18.089896577s
Apr 30 13:49:29.232: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=true. Elapsed: 20.093211978s
Apr 30 13:49:31.236: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Running", Reason="", readiness=false. Elapsed: 22.096551649s
Apr 30 13:49:33.238: INFO: Pod "pod-subpath-test-projected-sx48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.099402551s
STEP: Saw pod success
Apr 30 13:49:33.238: INFO: Pod "pod-subpath-test-projected-sx48" satisfied condition "Succeeded or Failed"
Apr 30 13:49:33.241: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-subpath-test-projected-sx48 container test-container-subpath-projected-sx48: <nil>
STEP: delete the pod
Apr 30 13:49:33.253: INFO: Waiting for pod pod-subpath-test-projected-sx48 to disappear
Apr 30 13:49:33.256: INFO: Pod pod-subpath-test-projected-sx48 no longer exists
STEP: Deleting pod pod-subpath-test-projected-sx48
Apr 30 13:49:33.256: INFO: Deleting pod "pod-subpath-test-projected-sx48" in namespace "subpath-6752"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:33.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6752" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":13,"skipped":271,"failed":0}
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:06.265: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Performing setup for networking test in namespace pod-network-test-6951
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 30 13:49:06.281: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 30 13:49:06.317: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:49:08.320: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:10.321: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:12.321: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:14.346: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:16.323: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:18.322: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:20.385: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:22.324: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:24.327: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:26.321: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:28.321: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:30.322: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 30 13:49:32.321: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 30 13:49:32.325: INFO: The status of Pod netserver-1 is Running (Ready = true)
Apr 30 13:49:32.329: INFO: The status of Pod netserver-2 is Running (Ready = true)
Apr 30 13:49:32.334: INFO: The status of Pod netserver-3 is Running (Ready = false)
Apr 30 13:49:34.338: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Apr 30 13:49:36.361: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Apr 30 13:49:36.361: INFO: Going to poll 192.168.2.17 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Apr 30 13:49:36.363: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.17:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6951 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:49:36.363: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:49:36.364: INFO: ExecWithOptions: Clientset creation
Apr 30 13:49:36.364: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6951/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.17%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Apr 30 13:49:36.448: INFO: Found all 1 expected endpoints: [netserver-0]
Apr 30 13:49:36.448: INFO: Going to poll 192.168.0.10 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Apr 30 13:49:36.451: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.0.10:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6951 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:49:36.451: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:49:36.451: INFO: ExecWithOptions: Clientset creation
Apr 30 13:49:36.452: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6951/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.0.10%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Apr 30 13:49:36.506: INFO: Found all 1 expected endpoints: [netserver-1]
Apr 30 13:49:36.506: INFO: Going to poll 192.168.3.10 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Apr 30 13:49:36.508: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.3.10:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6951 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:49:36.508: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:49:36.509: INFO: ExecWithOptions: Clientset creation
Apr 30 13:49:36.509: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6951/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.3.10%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Apr 30 13:49:36.580: INFO: Found all 1 expected endpoints: [netserver-2]
Apr 30 13:49:36.580: INFO: Going to poll 192.168.6.13 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Apr 30 13:49:36.583: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.6.13:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6951 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:49:36.583: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:49:36.584: INFO: ExecWithOptions: Clientset creation
Apr 30 13:49:36.584: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6951/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.6.13%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Apr 30 13:49:36.662: INFO: Found all 1 expected endpoints: [netserver-3]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:36.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6951" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":98,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:33.516: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Replace and Patch tests [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:49:33.547: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 30 13:49:38.552: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: Scaling up "test-rs" replicaset
Apr 30 13:49:38.559: INFO: Updating replica set "test-rs"
STEP: patching the ReplicaSet
Apr 30 13:49:38.571: INFO: observed ReplicaSet test-rs in namespace replicaset-3541 with ReadyReplicas 1, AvailableReplicas 1
Apr 30 13:49:38.583: INFO: observed ReplicaSet test-rs in namespace replicaset-3541 with ReadyReplicas 1, AvailableReplicas 1
Apr 30 13:49:38.594: INFO: observed ReplicaSet test-rs in namespace replicaset-3541 with ReadyReplicas 1, AvailableReplicas 1
Apr 30 13:49:38.608: INFO: observed ReplicaSet test-rs in namespace replicaset-3541 with ReadyReplicas 1, AvailableReplicas 1
Apr 30 13:49:39.720: INFO: observed ReplicaSet test-rs in namespace replicaset-3541 with ReadyReplicas 2, AvailableReplicas 2
Apr 30 13:49:40.006: INFO: observed Replicaset test-rs in namespace replicaset-3541 with ReadyReplicas 3 found true
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:40.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3541" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":14,"skipped":451,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:40.029: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 30 13:49:40.059: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27cd1adc-9aac-4712-8bf3-a4c1d40dc059" in namespace "projected-5031" to be "Succeeded or Failed"
Apr 30 13:49:40.062: INFO: Pod "downwardapi-volume-27cd1adc-9aac-4712-8bf3-a4c1d40dc059": Phase="Pending", Reason="", readiness=false. Elapsed: 3.03356ms
Apr 30 13:49:42.068: INFO: Pod "downwardapi-volume-27cd1adc-9aac-4712-8bf3-a4c1d40dc059": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008255298s
Apr 30 13:49:44.073: INFO: Pod "downwardapi-volume-27cd1adc-9aac-4712-8bf3-a4c1d40dc059": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013106232s
STEP: Saw pod success
Apr 30 13:49:44.073: INFO: Pod "downwardapi-volume-27cd1adc-9aac-4712-8bf3-a4c1d40dc059" satisfied condition "Succeeded or Failed"
Apr 30 13:49:44.075: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod downwardapi-volume-27cd1adc-9aac-4712-8bf3-a4c1d40dc059 container client-container: <nil>
STEP: delete the pod
Apr 30 13:49:44.086: INFO: Waiting for pod downwardapi-volume-27cd1adc-9aac-4712-8bf3-a4c1d40dc059 to disappear
Apr 30 13:49:44.088: INFO: Pod downwardapi-volume-27cd1adc-9aac-4712-8bf3-a4c1d40dc059 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:44.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5031" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":459,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:36.702: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7795.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7795.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7795.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7795.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 30 13:49:44.773: INFO: DNS probes using dns-7795/dns-test-0db6b8b4-2795-43ee-8e9e-3b358b8a58c6 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:49:44.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7795" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":117,"failed":0}
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:49:44.860: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not be very high [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:49:44.876: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating
replication controller svc-latency-rc in namespace svc-latency-3334 I0430 13:49:44.883353 21 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3334, replica count: 1 I0430 13:49:45.934515 21 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 30 13:49:46.046: INFO: Created: latency-svc-lwttf Apr 30 13:49:46.060: INFO: Got endpoints: latency-svc-lwttf [24.825919ms] Apr 30 13:49:46.078: INFO: Created: latency-svc-j2bxf Apr 30 13:49:46.085: INFO: Created: latency-svc-vgcpg Apr 30 13:49:46.095: INFO: Got endpoints: latency-svc-j2bxf [34.678963ms] Apr 30 13:49:46.099: INFO: Got endpoints: latency-svc-vgcpg [37.864247ms] Apr 30 13:49:46.103: INFO: Created: latency-svc-c8hth Apr 30 13:49:46.112: INFO: Got endpoints: latency-svc-c8hth [51.217967ms] Apr 30 13:49:46.115: INFO: Created: latency-svc-ng698 Apr 30 13:49:46.131: INFO: Got endpoints: latency-svc-ng698 [69.414851ms] Apr 30 13:49:46.147: INFO: Created: latency-svc-vbqbv Apr 30 13:49:46.162: INFO: Created: latency-svc-5rwwb Apr 30 13:49:46.165: INFO: Got endpoints: latency-svc-vbqbv [102.870525ms] Apr 30 13:49:46.172: INFO: Got endpoints: latency-svc-5rwwb [110.307375ms] Apr 30 13:49:46.176: INFO: Created: latency-svc-jg9mr Apr 30 13:49:46.188: INFO: Got endpoints: latency-svc-jg9mr [125.825057ms] Apr 30 13:49:46.194: INFO: Created: latency-svc-k9cqm Apr 30 13:49:46.201: INFO: Got endpoints: latency-svc-k9cqm [138.686191ms] Apr 30 13:49:46.206: INFO: Created: latency-svc-j986k Apr 30 13:49:46.210: INFO: Got endpoints: latency-svc-j986k [147.713726ms] Apr 30 13:49:46.228: INFO: Created: latency-svc-sms8t Apr 30 13:49:46.230: INFO: Got endpoints: latency-svc-sms8t [167.064631ms] Apr 30 13:49:46.242: INFO: Created: latency-svc-tmhhw Apr 30 13:49:46.256: INFO: Got endpoints: latency-svc-tmhhw [193.451832ms] Apr 30 13:49:46.260: INFO: Created: latency-svc-4pw2v Apr 30 
13:49:46.263: INFO: Got endpoints: latency-svc-4pw2v [200.221072ms] Apr 30 13:49:46.269: INFO: Created: latency-svc-fsq8v Apr 30 13:49:46.277: INFO: Got endpoints: latency-svc-fsq8v [215.531503ms] Apr 30 13:49:46.283: INFO: Created: latency-svc-d7p8v Apr 30 13:49:46.287: INFO: Got endpoints: latency-svc-d7p8v [223.573011ms] Apr 30 13:49:46.298: INFO: Created: latency-svc-l4tfj Apr 30 13:49:46.301: INFO: Got endpoints: latency-svc-l4tfj [238.154768ms] Apr 30 13:49:46.309: INFO: Created: latency-svc-m4mgf Apr 30 13:49:46.317: INFO: Got endpoints: latency-svc-m4mgf [221.429488ms] Apr 30 13:49:46.319: INFO: Created: latency-svc-jpgb5 Apr 30 13:49:46.325: INFO: Got endpoints: latency-svc-jpgb5 [226.405445ms] Apr 30 13:49:46.330: INFO: Created: latency-svc-g6vfm Apr 30 13:49:46.333: INFO: Got endpoints: latency-svc-g6vfm [220.355506ms] Apr 30 13:49:46.337: INFO: Created: latency-svc-hhzc2 Apr 30 13:49:46.343: INFO: Got endpoints: latency-svc-hhzc2 [211.634216ms] Apr 30 13:49:46.346: INFO: Created: latency-svc-lkwh4 Apr 30 13:49:46.354: INFO: Got endpoints: latency-svc-lkwh4 [189.046563ms] Apr 30 13:49:46.355: INFO: Created: latency-svc-h6g59 Apr 30 13:49:46.365: INFO: Got endpoints: latency-svc-h6g59 [192.963945ms] Apr 30 13:49:46.368: INFO: Created: latency-svc-l7rvm Apr 30 13:49:46.374: INFO: Got endpoints: latency-svc-l7rvm [185.496977ms] Apr 30 13:49:46.378: INFO: Created: latency-svc-7clrm Apr 30 13:49:46.382: INFO: Got endpoints: latency-svc-7clrm [181.452747ms] Apr 30 13:49:46.391: INFO: Created: latency-svc-nm9bd Apr 30 13:49:46.396: INFO: Got endpoints: latency-svc-nm9bd [184.784481ms] Apr 30 13:49:46.402: INFO: Created: latency-svc-gwv8j Apr 30 13:49:46.410: INFO: Got endpoints: latency-svc-gwv8j [180.097143ms] Apr 30 13:49:46.419: INFO: Created: latency-svc-tgg9m Apr 30 13:49:46.427: INFO: Created: latency-svc-gpjcs Apr 30 13:49:46.427: INFO: Got endpoints: latency-svc-tgg9m [171.236986ms] Apr 30 13:49:46.433: INFO: Got endpoints: latency-svc-gpjcs 
[169.201153ms] Apr 30 13:49:46.436: INFO: Created: latency-svc-t7kpc Apr 30 13:49:46.440: INFO: Got endpoints: latency-svc-t7kpc [163.279206ms] Apr 30 13:49:46.446: INFO: Created: latency-svc-n2xgt Apr 30 13:49:46.452: INFO: Got endpoints: latency-svc-n2xgt [165.414337ms] Apr 30 13:49:46.455: INFO: Created: latency-svc-v5gr2 Apr 30 13:49:46.458: INFO: Got endpoints: latency-svc-v5gr2 [157.246202ms] Apr 30 13:49:46.461: INFO: Created: latency-svc-2nth7 Apr 30 13:49:46.473: INFO: Got endpoints: latency-svc-2nth7 [155.861754ms] Apr 30 13:49:46.478: INFO: Created: latency-svc-kvvdm Apr 30 13:49:46.482: INFO: Got endpoints: latency-svc-kvvdm [156.772747ms] Apr 30 13:49:46.487: INFO: Created: latency-svc-2ntjk Apr 30 13:49:46.488: INFO: Got endpoints: latency-svc-2ntjk [155.230085ms] Apr 30 13:49:46.494: INFO: Created: latency-svc-qdqpr Apr 30 13:49:46.497: INFO: Got endpoints: latency-svc-qdqpr [153.756139ms] Apr 30 13:49:46.501: INFO: Created: latency-svc-vt6rw Apr 30 13:49:46.507: INFO: Got endpoints: latency-svc-vt6rw [153.712121ms] Apr 30 13:49:46.511: INFO: Created: latency-svc-l86xt Apr 30 13:49:46.514: INFO: Got endpoints: latency-svc-l86xt [149.066001ms] Apr 30 13:49:46.519: INFO: Created: latency-svc-tlhb4 Apr 30 13:49:46.528: INFO: Got endpoints: latency-svc-tlhb4 [154.528319ms] Apr 30 13:49:46.531: INFO: Created: latency-svc-bgrsl Apr 30 13:49:46.538: INFO: Got endpoints: latency-svc-bgrsl [155.393416ms] Apr 30 13:49:46.545: INFO: Created: latency-svc-dghjc Apr 30 13:49:46.551: INFO: Got endpoints: latency-svc-dghjc [154.583831ms] Apr 30 13:49:46.565: INFO: Created: latency-svc-9vw47 Apr 30 13:49:46.572: INFO: Created: latency-svc-z2wrv Apr 30 13:49:46.584: INFO: Created: latency-svc-44r22 Apr 30 13:49:46.595: INFO: Created: latency-svc-rlcqq Apr 30 13:49:46.606: INFO: Got endpoints: latency-svc-9vw47 [195.50533ms] Apr 30 13:49:46.610: INFO: Created: latency-svc-9f95t Apr 30 13:49:46.615: INFO: Created: latency-svc-bmnkq Apr 30 13:49:46.621: INFO: Created: 
latency-svc-plsbs Apr 30 13:49:46.630: INFO: Created: latency-svc-2xg8n Apr 30 13:49:46.637: INFO: Created: latency-svc-tbrxn Apr 30 13:49:46.642: INFO: Created: latency-svc-n5n29 Apr 30 13:49:46.649: INFO: Created: latency-svc-7gfcb Apr 30 13:49:46.651: INFO: Got endpoints: latency-svc-z2wrv [223.655189ms] Apr 30 13:49:46.658: INFO: Created: latency-svc-btwfc Apr 30 13:49:46.666: INFO: Created: latency-svc-bngfp Apr 30 13:49:46.674: INFO: Created: latency-svc-fc5fq Apr 30 13:49:46.683: INFO: Created: latency-svc-jpq9v Apr 30 13:49:46.700: INFO: Got endpoints: latency-svc-44r22 [266.819557ms] Apr 30 13:49:46.702: INFO: Created: latency-svc-5pbg7 Apr 30 13:49:46.709: INFO: Created: latency-svc-tszlt Apr 30 13:49:46.715: INFO: Created: latency-svc-j2xvd Apr 30 13:49:46.751: INFO: Got endpoints: latency-svc-rlcqq [310.722709ms] Apr 30 13:49:46.762: INFO: Created: latency-svc-gsgnp Apr 30 13:49:46.799: INFO: Got endpoints: latency-svc-9f95t [347.007932ms] Apr 30 13:49:46.812: INFO: Created: latency-svc-gqcww Apr 30 13:49:46.852: INFO: Got endpoints: latency-svc-bmnkq [392.828515ms] Apr 30 13:49:46.862: INFO: Created: latency-svc-slrjp Apr 30 13:49:46.906: INFO: Got endpoints: latency-svc-plsbs [432.875904ms] Apr 30 13:49:46.918: INFO: Created: latency-svc-j8kdc Apr 30 13:49:46.951: INFO: Got endpoints: latency-svc-2xg8n [468.572835ms] Apr 30 13:49:46.963: INFO: Created: latency-svc-jv9gc Apr 30 13:49:47.000: INFO: Got endpoints: latency-svc-tbrxn [511.535267ms] Apr 30 13:49:47.018: INFO: Created: latency-svc-hnqwq Apr 30 13:49:47.051: INFO: Got endpoints: latency-svc-n5n29 [553.893457ms] Apr 30 13:49:47.072: INFO: Created: latency-svc-jpwj4 Apr 30 13:49:47.103: INFO: Got endpoints: latency-svc-7gfcb [595.855102ms] Apr 30 13:49:47.130: INFO: Created: latency-svc-tw8zn Apr 30 13:49:47.153: INFO: Got endpoints: latency-svc-btwfc [638.224401ms] Apr 30 13:49:47.165: INFO: Created: latency-svc-54rfk Apr 30 13:49:47.200: INFO: Got endpoints: latency-svc-bngfp [671.398777ms] 
Apr 30 13:49:47.217: INFO: Created: latency-svc-p58rh Apr 30 13:49:47.250: INFO: Got endpoints: latency-svc-fc5fq [712.109641ms] Apr 30 13:49:47.268: INFO: Created: latency-svc-slpkn Apr 30 13:49:47.300: INFO: Got endpoints: latency-svc-jpq9v [748.604836ms] Apr 30 13:49:47.324: INFO: Created: latency-svc-dr6cx Apr 30 13:49:47.350: INFO: Got endpoints: latency-svc-5pbg7 [744.650978ms] Apr 30 13:49:47.362: INFO: Created: latency-svc-xpq25 Apr 30 13:49:47.400: INFO: Got endpoints: latency-svc-tszlt [749.379219ms] Apr 30 13:49:47.411: INFO: Created: latency-svc-mr78v Apr 30 13:49:47.449: INFO: Got endpoints: latency-svc-j2xvd [749.201654ms] Apr 30 13:49:47.462: INFO: Created: latency-svc-wsrbb Apr 30 13:49:47.502: INFO: Got endpoints: latency-svc-gsgnp [751.267639ms] Apr 30 13:49:47.515: INFO: Created: latency-svc-5lp75 Apr 30 13:49:47.558: INFO: Got endpoints: latency-svc-gqcww [758.284151ms] Apr 30 13:49:47.569: INFO: Created: latency-svc-lv5b6 Apr 30 13:49:47.600: INFO: Got endpoints: latency-svc-slrjp [748.291247ms] Apr 30 13:49:47.612: INFO: Created: latency-svc-h66fk Apr 30 13:49:47.650: INFO: Got endpoints: latency-svc-j8kdc [744.318137ms] Apr 30 13:49:47.671: INFO: Created: latency-svc-ktc2t Apr 30 13:49:47.699: INFO: Got endpoints: latency-svc-jv9gc [748.138184ms] Apr 30 13:49:47.710: INFO: Created: latency-svc-fxg2c Apr 30 13:49:47.751: INFO: Got endpoints: latency-svc-hnqwq [751.134003ms] Apr 30 13:49:47.768: INFO: Created: latency-svc-dxstv Apr 30 13:49:47.799: INFO: Got endpoints: latency-svc-jpwj4 [748.6663ms] Apr 30 13:49:47.811: INFO: Created: latency-svc-jjp6d Apr 30 13:49:47.850: INFO: Got endpoints: latency-svc-tw8zn [745.440018ms] Apr 30 13:49:47.863: INFO: Created: latency-svc-vcc8l Apr 30 13:49:47.900: INFO: Got endpoints: latency-svc-54rfk [747.182253ms] Apr 30 13:49:47.911: INFO: Created: latency-svc-hbn9f Apr 30 13:49:47.950: INFO: Got endpoints: latency-svc-p58rh [749.663564ms] Apr 30 13:49:47.960: INFO: Created: latency-svc-p92rl Apr 30 
13:49:47.999: INFO: Got endpoints: latency-svc-slpkn [748.872446ms] Apr 30 13:49:48.011: INFO: Created: latency-svc-zm95g Apr 30 13:49:48.049: INFO: Got endpoints: latency-svc-dr6cx [749.587362ms] Apr 30 13:49:48.078: INFO: Created: latency-svc-4f9d8 Apr 30 13:49:48.101: INFO: Got endpoints: latency-svc-xpq25 [750.583226ms] Apr 30 13:49:48.119: INFO: Created: latency-svc-t2mr9 Apr 30 13:49:48.155: INFO: Got endpoints: latency-svc-mr78v [753.682393ms] Apr 30 13:49:48.169: INFO: Created: latency-svc-hws9v Apr 30 13:49:48.200: INFO: Got endpoints: latency-svc-wsrbb [750.507626ms] Apr 30 13:49:48.212: INFO: Created: latency-svc-k8gt7 Apr 30 13:49:48.249: INFO: Got endpoints: latency-svc-5lp75 [746.560022ms] Apr 30 13:49:48.263: INFO: Created: latency-svc-nbqq9 Apr 30 13:49:48.299: INFO: Got endpoints: latency-svc-lv5b6 [741.579819ms] Apr 30 13:49:48.316: INFO: Created: latency-svc-z9bd8 Apr 30 13:49:48.350: INFO: Got endpoints: latency-svc-h66fk [749.685476ms] Apr 30 13:49:48.361: INFO: Created: latency-svc-xnn4z Apr 30 13:49:48.405: INFO: Got endpoints: latency-svc-ktc2t [755.306338ms] Apr 30 13:49:48.417: INFO: Created: latency-svc-9pdjv Apr 30 13:49:48.449: INFO: Got endpoints: latency-svc-fxg2c [750.422126ms] Apr 30 13:49:48.462: INFO: Created: latency-svc-pjsgj Apr 30 13:49:48.500: INFO: Got endpoints: latency-svc-dxstv [748.260021ms] Apr 30 13:49:48.514: INFO: Created: latency-svc-z5j9f Apr 30 13:49:48.549: INFO: Got endpoints: latency-svc-jjp6d [749.504184ms] Apr 30 13:49:48.560: INFO: Created: latency-svc-m7qtk Apr 30 13:49:48.600: INFO: Got endpoints: latency-svc-vcc8l [749.932234ms] Apr 30 13:49:48.614: INFO: Created: latency-svc-hxw4s Apr 30 13:49:48.650: INFO: Got endpoints: latency-svc-hbn9f [749.722296ms] Apr 30 13:49:48.660: INFO: Created: latency-svc-kgv6m Apr 30 13:49:48.700: INFO: Got endpoints: latency-svc-p92rl [749.983126ms] Apr 30 13:49:48.713: INFO: Created: latency-svc-xc8mh Apr 30 13:49:48.749: INFO: Got endpoints: latency-svc-zm95g 
[750.373612ms] Apr 30 13:49:48.761: INFO: Created: latency-svc-rbcth Apr 30 13:49:48.800: INFO: Got endpoints: latency-svc-4f9d8 [750.70368ms] Apr 30 13:49:48.813: INFO: Created: latency-svc-fmr2w Apr 30 13:49:48.850: INFO: Got endpoints: latency-svc-t2mr9 [748.72349ms] Apr 30 13:49:48.867: INFO: Created: latency-svc-4vg78 Apr 30 13:49:48.901: INFO: Got endpoints: latency-svc-hws9v [746.638317ms] Apr 30 13:49:48.917: INFO: Created: latency-svc-h7vcn Apr 30 13:49:48.949: INFO: Got endpoints: latency-svc-k8gt7 [749.648825ms] Apr 30 13:49:48.977: INFO: Created: latency-svc-mqzr6 Apr 30 13:49:49.001: INFO: Got endpoints: latency-svc-nbqq9 [751.593607ms] Apr 30 13:49:49.013: INFO: Created: latency-svc-qqm2c Apr 30 13:49:49.050: INFO: Got endpoints: latency-svc-z9bd8 [750.471381ms] Apr 30 13:49:49.063: INFO: Created: latency-svc-qmjcs Apr 30 13:49:49.099: INFO: Got endpoints: latency-svc-xnn4z [749.181695ms] Apr 30 13:49:49.115: INFO: Created: latency-svc-ck5x9 Apr 30 13:49:49.155: INFO: Got endpoints: latency-svc-9pdjv [749.254871ms] Apr 30 13:49:49.170: INFO: Created: latency-svc-nkw2m Apr 30 13:49:49.200: INFO: Got endpoints: latency-svc-pjsgj [750.326238ms] Apr 30 13:49:49.212: INFO: Created: latency-svc-dfdfc Apr 30 13:49:49.249: INFO: Got endpoints: latency-svc-z5j9f [749.175105ms] Apr 30 13:49:49.264: INFO: Created: latency-svc-vb7ww Apr 30 13:49:49.299: INFO: Got endpoints: latency-svc-m7qtk [749.688987ms] Apr 30 13:49:49.310: INFO: Created: latency-svc-jgzn8 Apr 30 13:49:49.354: INFO: Got endpoints: latency-svc-hxw4s [753.486647ms] Apr 30 13:49:49.365: INFO: Created: latency-svc-wq5px Apr 30 13:49:49.400: INFO: Got endpoints: latency-svc-kgv6m [750.585276ms] Apr 30 13:49:49.411: INFO: Created: latency-svc-4qxcc Apr 30 13:49:49.449: INFO: Got endpoints: latency-svc-xc8mh [749.532638ms] Apr 30 13:49:49.463: INFO: Created: latency-svc-wrf77 Apr 30 13:49:49.499: INFO: Got endpoints: latency-svc-rbcth [750.039685ms] Apr 30 13:49:49.511: INFO: Created: 
latency-svc-dtwkg Apr 30 13:49:49.549: INFO: Got endpoints: latency-svc-fmr2w [749.377557ms] Apr 30 13:49:49.563: INFO: Created: latency-svc-8nrxx Apr 30 13:49:49.600: INFO: Got endpoints: latency-svc-4vg78 [750.07309ms] Apr 30 13:49:49.613: INFO: Created: latency-svc-smgzq Apr 30 13:49:49.650: INFO: Got endpoints: latency-svc-h7vcn [747.13887ms] Apr 30 13:49:49.661: INFO: Created: latency-svc-f4nwt Apr 30 13:49:49.700: INFO: Got endpoints: latency-svc-mqzr6 [750.641681ms] Apr 30 13:49:49.711: INFO: Created: latency-svc-x7vw2 Apr 30 13:49:49.749: INFO: Got endpoints: latency-svc-qqm2c [748.241479ms] Apr 30 13:49:49.760: INFO: Created: latency-svc-c4v6g Apr 30 13:49:49.799: INFO: Got endpoints: latency-svc-qmjcs [749.399857ms] Apr 30 13:49:49.809: INFO: Created: latency-svc-tsggb Apr 30 13:49:49.850: INFO: Got endpoints: latency-svc-ck5x9 [750.494452ms] Apr 30 13:49:49.861: INFO: Created: latency-svc-tbt6h Apr 30 13:49:49.898: INFO: Got endpoints: latency-svc-nkw2m [743.157449ms] Apr 30 13:49:49.908: INFO: Created: latency-svc-7kb5c Apr 30 13:49:49.948: INFO: Got endpoints: latency-svc-dfdfc [748.597486ms] Apr 30 13:49:49.959: INFO: Created: latency-svc-rdlxf Apr 30 13:49:49.998: INFO: Got endpoints: latency-svc-vb7ww [748.914345ms] Apr 30 13:49:50.010: INFO: Created: latency-svc-tmfgn Apr 30 13:49:50.050: INFO: Got endpoints: latency-svc-jgzn8 [750.270261ms] Apr 30 13:49:50.066: INFO: Created: latency-svc-4w2cf Apr 30 13:49:50.101: INFO: Got endpoints: latency-svc-wq5px [747.100276ms] Apr 30 13:49:50.117: INFO: Created: latency-svc-sd64d Apr 30 13:49:50.150: INFO: Got endpoints: latency-svc-4qxcc [749.884787ms] Apr 30 13:49:50.166: INFO: Created: latency-svc-pfrkp Apr 30 13:49:50.199: INFO: Got endpoints: latency-svc-wrf77 [749.935126ms] Apr 30 13:49:50.212: INFO: Created: latency-svc-dh95f Apr 30 13:49:50.249: INFO: Got endpoints: latency-svc-dtwkg [749.21158ms] Apr 30 13:49:50.258: INFO: Created: latency-svc-42cwk Apr 30 13:49:50.300: INFO: Got endpoints: 
latency-svc-8nrxx [750.377963ms] Apr 30 13:49:50.311: INFO: Created: latency-svc-nsnfg Apr 30 13:49:50.350: INFO: Got endpoints: latency-svc-smgzq [750.048436ms] Apr 30 13:49:50.366: INFO: Created: latency-svc-wx95w Apr 30 13:49:50.399: INFO: Got endpoints: latency-svc-f4nwt [748.590292ms] Apr 30 13:49:50.411: INFO: Created: latency-svc-p4thw Apr 30 13:49:50.450: INFO: Got endpoints: latency-svc-x7vw2 [749.69522ms] Apr 30 13:49:50.461: INFO: Created: latency-svc-2t97r Apr 30 13:49:50.499: INFO: Got endpoints: latency-svc-c4v6g [750.205088ms] Apr 30 13:49:50.511: INFO: Created: latency-svc-pmp66 Apr 30 13:49:50.550: INFO: Got endpoints: latency-svc-tsggb [750.756149ms] Apr 30 13:49:50.561: INFO: Created: latency-svc-wn6x7 Apr 30 13:49:50.599: INFO: Got endpoints: latency-svc-tbt6h [749.704873ms] Apr 30 13:49:50.611: INFO: Created: latency-svc-z82sq Apr 30 13:49:50.650: INFO: Got endpoints: latency-svc-7kb5c [751.892724ms] Apr 30 13:49:50.664: INFO: Created: latency-svc-lk6m2 Apr 30 13:49:50.700: INFO: Got endpoints: latency-svc-rdlxf [751.247205ms] Apr 30 13:49:50.711: INFO: Created: latency-svc-tcnr2 Apr 30 13:49:50.750: INFO: Got endpoints: latency-svc-tmfgn [751.286374ms] Apr 30 13:49:50.759: INFO: Created: latency-svc-pjx5l Apr 30 13:49:50.799: INFO: Got endpoints: latency-svc-4w2cf [749.079033ms] Apr 30 13:49:50.809: INFO: Created: latency-svc-zldhh Apr 30 13:49:50.850: INFO: Got endpoints: latency-svc-sd64d [749.019353ms] Apr 30 13:49:50.863: INFO: Created: latency-svc-smmhp Apr 30 13:49:50.900: INFO: Got endpoints: latency-svc-pfrkp [750.100645ms] Apr 30 13:49:50.911: INFO: Created: latency-svc-9gj7m Apr 30 13:49:50.949: INFO: Got endpoints: latency-svc-dh95f [750.08987ms] Apr 30 13:49:50.960: INFO: Created: latency-svc-gp8x7 Apr 30 13:49:51.001: INFO: Got endpoints: latency-svc-42cwk [752.401919ms] Apr 30 13:49:51.011: INFO: Created: latency-svc-k8nkb Apr 30 13:49:51.049: INFO: Got endpoints: latency-svc-nsnfg [749.438988ms] Apr 30 13:49:51.060: INFO: 
Created: latency-svc-qqlw6 Apr 30 13:49:51.099: INFO: Got endpoints: latency-svc-wx95w [749.209018ms] Apr 30 13:49:51.116: INFO: Created: latency-svc-gdjcc Apr 30 13:49:51.151: INFO: Got endpoints: latency-svc-p4thw [751.913952ms] Apr 30 13:49:51.167: INFO: Created: latency-svc-jzbr8 Apr 30 13:49:51.200: INFO: Got endpoints: latency-svc-2t97r [749.651824ms] Apr 30 13:49:51.210: INFO: Created: latency-svc-5dprp Apr 30 13:49:51.251: INFO: Got endpoints: latency-svc-pmp66 [751.679447ms] Apr 30 13:49:51.262: INFO: Created: latency-svc-scwhm Apr 30 13:49:51.300: INFO: Got endpoints: latency-svc-wn6x7 [749.995629ms] Apr 30 13:49:51.311: INFO: Created: latency-svc-8nqbq Apr 30 13:49:51.350: INFO: Got endpoints: latency-svc-z82sq [749.916033ms] Apr 30 13:49:51.372: INFO: Created: latency-svc-mk5w6 Apr 30 13:49:51.400: INFO: Got endpoints: latency-svc-lk6m2 [750.12501ms] Apr 30 13:49:51.410: INFO: Created: latency-svc-hjjqw Apr 30 13:49:51.449: INFO: Got endpoints: latency-svc-tcnr2 [749.219439ms] Apr 30 13:49:51.461: INFO: Created: latency-svc-kvwvr Apr 30 13:49:51.500: INFO: Got endpoints: latency-svc-pjx5l [750.605512ms] Apr 30 13:49:51.511: INFO: Created: latency-svc-5mjmc Apr 30 13:49:51.549: INFO: Got endpoints: latency-svc-zldhh [750.521668ms] Apr 30 13:49:51.584: INFO: Created: latency-svc-jz4dv Apr 30 13:49:51.599: INFO: Got endpoints: latency-svc-smmhp [749.516934ms] Apr 30 13:49:51.611: INFO: Created: latency-svc-rznkj Apr 30 13:49:51.650: INFO: Got endpoints: latency-svc-9gj7m [749.726473ms] Apr 30 13:49:51.661: INFO: Created: latency-svc-w67f8 Apr 30 13:49:51.699: INFO: Got endpoints: latency-svc-gp8x7 [749.32342ms] Apr 30 13:49:51.710: INFO: Created: latency-svc-mlk2l Apr 30 13:49:51.749: INFO: Got endpoints: latency-svc-k8nkb [747.485749ms] Apr 30 13:49:51.760: INFO: Created: latency-svc-77xx5 Apr 30 13:49:51.799: INFO: Got endpoints: latency-svc-qqlw6 [749.796246ms] Apr 30 13:49:51.811: INFO: Created: latency-svc-2vblp Apr 30 13:49:51.849: INFO: Got 
endpoints: latency-svc-gdjcc [749.850716ms] Apr 30 13:49:51.859: INFO: Created: latency-svc-dfc72 Apr 30 13:49:51.900: INFO: Got endpoints: latency-svc-jzbr8 [748.775562ms] Apr 30 13:49:51.909: INFO: Created: latency-svc-9449v Apr 30 13:49:51.949: INFO: Got endpoints: latency-svc-5dprp [749.71295ms] Apr 30 13:49:51.961: INFO: Created: latency-svc-5zsh2 Apr 30 13:49:52.003: INFO: Got endpoints: latency-svc-scwhm [752.069475ms] Apr 30 13:49:52.014: INFO: Created: latency-svc-d7ss6 Apr 30 13:49:52.049: INFO: Got endpoints: latency-svc-8nqbq [748.847438ms] Apr 30 13:49:52.062: INFO: Created: latency-svc-7c2km Apr 30 13:49:52.100: INFO: Got endpoints: latency-svc-mk5w6 [750.696504ms] Apr 30 13:49:52.117: INFO: Created: latency-svc-sx2nx Apr 30 13:49:52.151: INFO: Got endpoints: latency-svc-hjjqw [750.614426ms] Apr 30 13:49:52.160: INFO: Created: latency-svc-4mlkl Apr 30 13:49:52.199: INFO: Got endpoints: latency-svc-kvwvr [749.28246ms] Apr 30 13:49:52.214: INFO: Created: latency-svc-bskfg Apr 30 13:49:52.249: INFO: Got endpoints: latency-svc-5mjmc [748.831024ms] Apr 30 13:49:52.260: INFO: Created: latency-svc-knkmz Apr 30 13:49:52.299: INFO: Got endpoints: latency-svc-jz4dv [749.009621ms] Apr 30 13:49:52.310: INFO: Created: latency-svc-bb86g Apr 30 13:49:52.350: INFO: Got endpoints: latency-svc-rznkj [750.123067ms] Apr 30 13:49:52.363: INFO: Created: latency-svc-4tjlt Apr 30 13:49:52.400: INFO: Got endpoints: latency-svc-w67f8 [749.427344ms] Apr 30 13:49:52.411: INFO: Created: latency-svc-hvn4n Apr 30 13:49:52.450: INFO: Got endpoints: latency-svc-mlk2l [750.819897ms] Apr 30 13:49:52.459: INFO: Created: latency-svc-njtqn Apr 30 13:49:52.499: INFO: Got endpoints: latency-svc-77xx5 [750.804096ms] Apr 30 13:49:52.510: INFO: Created: latency-svc-kfz7k Apr 30 13:49:52.549: INFO: Got endpoints: latency-svc-2vblp [750.078409ms] Apr 30 13:49:52.560: INFO: Created: latency-svc-rrv5x Apr 30 13:49:52.599: INFO: Got endpoints: latency-svc-dfc72 [749.428981ms] Apr 30 13:49:52.611: 
INFO: Created: latency-svc-kmmnw Apr 30 13:49:52.649: INFO: Got endpoints: latency-svc-9449v [749.144311ms] Apr 30 13:49:52.661: INFO: Created: latency-svc-rf2pt Apr 30 13:49:52.699: INFO: Got endpoints: latency-svc-5zsh2 [749.750989ms] Apr 30 13:49:52.710: INFO: Created: latency-svc-t5x69 Apr 30 13:49:52.749: INFO: Got endpoints: latency-svc-d7ss6 [745.809731ms] Apr 30 13:49:52.759: INFO: Created: latency-svc-wt9qv Apr 30 13:49:52.800: INFO: Got endpoints: latency-svc-7c2km [750.770415ms] Apr 30 13:49:52.809: INFO: Created: latency-svc-9q5dx Apr 30 13:49:52.849: INFO: Got endpoints: latency-svc-sx2nx [748.289614ms] Apr 30 13:49:52.859: INFO: Created: latency-svc-n4llm Apr 30 13:49:52.899: INFO: Got endpoints: latency-svc-4mlkl [748.346757ms] Apr 30 13:49:52.911: INFO: Created: latency-svc-r48vc Apr 30 13:49:52.951: INFO: Got endpoints: latency-svc-bskfg [751.482412ms] Apr 30 13:49:52.959: INFO: Created: latency-svc-vwmml Apr 30 13:49:53.000: INFO: Got endpoints: latency-svc-knkmz [750.545783ms] Apr 30 13:49:53.010: INFO: Created: latency-svc-fjlsn Apr 30 13:49:53.049: INFO: Got endpoints: latency-svc-bb86g [750.138359ms] Apr 30 13:49:53.066: INFO: Created: latency-svc-wg68p Apr 30 13:49:53.101: INFO: Got endpoints: latency-svc-4tjlt [751.353997ms] Apr 30 13:49:53.113: INFO: Created: latency-svc-nm965 Apr 30 13:49:53.151: INFO: Got endpoints: latency-svc-hvn4n [750.471108ms] Apr 30 13:49:53.170: INFO: Created: latency-svc-xzsmn Apr 30 13:49:53.199: INFO: Got endpoints: latency-svc-njtqn [749.700593ms] Apr 30 13:49:53.211: INFO: Created: latency-svc-vqfkj Apr 30 13:49:53.250: INFO: Got endpoints: latency-svc-kfz7k [750.587189ms] Apr 30 13:49:53.273: INFO: Created: latency-svc-hql25 Apr 30 13:49:53.300: INFO: Got endpoints: latency-svc-rrv5x [750.117274ms] Apr 30 13:49:53.308: INFO: Created: latency-svc-7q5wc Apr 30 13:49:53.349: INFO: Got endpoints: latency-svc-kmmnw [750.679653ms] Apr 30 13:49:53.359: INFO: Created: latency-svc-jgvbz Apr 30 13:49:53.400: INFO: Got 
endpoints: latency-svc-rf2pt [751.290749ms] Apr 30 13:49:53.410: INFO: Created: latency-svc-sqtdm Apr 30 13:49:53.449: INFO: Got endpoints: latency-svc-t5x69 [749.713084ms] Apr 30 13:49:53.465: INFO: Created: latency-svc-wxvzp Apr 30 13:49:53.500: INFO: Got endpoints: latency-svc-wt9qv [750.639417ms] Apr 30 13:49:53.509: INFO: Created: latency-svc-h8zp9 Apr 30 13:49:53.549: INFO: Got endpoints: latency-svc-9q5dx [749.329561ms] Apr 30 13:49:53.559: INFO: Created: latency-svc-hgq4n Apr 30 13:49:53.599: INFO: Got endpoints: latency-svc-n4llm [750.591215ms] Apr 30 13:49:53.609: INFO: Created: latency-svc-82xr7 Apr 30 13:49:53.649: INFO: Got endpoints: latency-svc-r48vc [750.069264ms] Apr 30 13:49:53.658: INFO: Created: latency-svc-48k96 Apr 30 13:49:53.700: INFO: Got endpoints: latency-svc-vwmml [749.23805ms] Apr 30 13:49:53.709: INFO: Created: latency-svc-xmvgj Apr 30 13:49:53.750: INFO: Got endpoints: latency-svc-fjlsn [749.755643ms] Apr 30 13:49:53.758: INFO: Created: latency-svc-bqskt Apr 30 13:49:53.799: INFO: Got endpoints: latency-svc-wg68p [750.015289ms] Apr 30 13:49:53.810: INFO: Created: latency-svc-zgjbj Apr 30 13:49:53.849: INFO: Got endpoints: latency-svc-nm965 [748.041536ms] Apr 30 13:49:53.861: INFO: Created: latency-svc-bjjtl Apr 30 13:49:53.899: INFO: Got endpoints: latency-svc-xzsmn [748.589766ms] Apr 30 13:49:53.950: INFO: Got endpoints: latency-svc-vqfkj [750.638517ms] Apr 30 13:49:54.000: INFO: Got endpoints: latency-svc-hql25 [750.163022ms] Apr 30 13:49:54.049: INFO: Got endpoints: latency-svc-7q5wc [749.698231ms] Apr 30 13:49:54.101: INFO: Got endpoints: latency-svc-jgvbz [750.986141ms] Apr 30 13:49:54.153: INFO: Got endpoints: latency-svc-sqtdm [752.279983ms] Apr 30 13:49:54.205: INFO: Got endpoints: latency-svc-wxvzp [756.304367ms] Apr 30 13:49:54.250: INFO: Got endpoints: latency-svc-h8zp9 [749.903226ms] Apr 30 13:49:54.300: INFO: Got endpoints: latency-svc-hgq4n [750.002391ms] Apr 30 13:49:54.350: INFO: Got endpoints: latency-svc-82xr7 
[750.129804ms] Apr 30 13:49:54.400: INFO: Got endpoints: latency-svc-48k96 [750.351435ms] Apr 30 13:49:54.450: INFO: Got endpoints: latency-svc-xmvgj [749.542926ms] Apr 30 13:49:54.499: INFO: Got endpoints: latency-svc-bqskt [749.585332ms] Apr 30 13:49:54.549: INFO: Got endpoints: latency-svc-zgjbj [750.085829ms] Apr 30 13:49:54.599: INFO: Got endpoints: latency-svc-bjjtl [749.856859ms] Apr 30 13:49:54.599: INFO: Latencies: [34.678963ms 37.864247ms 51.217967ms 69.414851ms 102.870525ms 110.307375ms 125.825057ms 138.686191ms 147.713726ms 149.066001ms 153.712121ms 153.756139ms 154.528319ms 154.583831ms 155.230085ms 155.393416ms 155.861754ms 156.772747ms 157.246202ms 163.279206ms 165.414337ms 167.064631ms 169.201153ms 171.236986ms 180.097143ms 181.452747ms 184.784481ms 185.496977ms 189.046563ms 192.963945ms 193.451832ms 195.50533ms 200.221072ms 211.634216ms 215.531503ms 220.355506ms 221.429488ms 223.573011ms 223.655189ms 226.405445ms 238.154768ms 266.819557ms 310.722709ms 347.007932ms 392.828515ms 432.875904ms 468.572835ms 511.535267ms 553.893457ms 595.855102ms 638.224401ms 671.398777ms 712.109641ms 741.579819ms 743.157449ms 744.318137ms 744.650978ms 745.440018ms 745.809731ms 746.560022ms 746.638317ms 747.100276ms 747.13887ms 747.182253ms 747.485749ms 748.041536ms 748.138184ms 748.241479ms 748.260021ms 748.289614ms 748.291247ms 748.346757ms 748.589766ms 748.590292ms 748.597486ms 748.604836ms 748.6663ms 748.72349ms 748.775562ms 748.831024ms 748.847438ms 748.872446ms 748.914345ms 749.009621ms 749.019353ms 749.079033ms 749.144311ms 749.175105ms 749.181695ms 749.201654ms 749.209018ms 749.21158ms 749.219439ms 749.23805ms 749.254871ms 749.28246ms 749.32342ms 749.329561ms 749.377557ms 749.379219ms 749.399857ms 749.427344ms 749.428981ms 749.438988ms 749.504184ms 749.516934ms 749.532638ms 749.542926ms 749.585332ms 749.587362ms 749.648825ms 749.651824ms 749.663564ms 749.685476ms 749.688987ms 749.69522ms 749.698231ms 749.700593ms 749.704873ms 749.71295ms 749.713084ms 749.722296ms 
749.726473ms 749.750989ms 749.755643ms 749.796246ms 749.850716ms 749.856859ms 749.884787ms 749.903226ms 749.916033ms 749.932234ms 749.935126ms 749.983126ms 749.995629ms 750.002391ms 750.015289ms 750.039685ms 750.048436ms 750.069264ms 750.07309ms 750.078409ms 750.085829ms 750.08987ms 750.100645ms 750.117274ms 750.123067ms 750.12501ms 750.129804ms 750.138359ms 750.163022ms 750.205088ms 750.270261ms 750.326238ms 750.351435ms 750.373612ms 750.377963ms 750.422126ms 750.471108ms 750.471381ms 750.494452ms 750.507626ms 750.521668ms 750.545783ms 750.583226ms 750.585276ms 750.587189ms 750.591215ms 750.605512ms 750.614426ms 750.638517ms 750.639417ms 750.641681ms 750.679653ms 750.696504ms 750.70368ms 750.756149ms 750.770415ms 750.804096ms 750.819897ms 750.986141ms 751.134003ms 751.247205ms 751.267639ms 751.286374ms 751.290749ms 751.353997ms 751.482412ms 751.593607ms 751.679447ms 751.892724ms 751.913952ms 752.069475ms 752.279983ms 752.401919ms 753.486647ms 753.682393ms 755.306338ms 756.304367ms 758.284151ms] Apr 30 13:49:54.599: INFO: 50 %ile: 749.399857ms Apr 30 13:49:54.599: INFO: 90 %ile: 750.986141ms Apr 30 13:49:54.599: INFO: 99 %ile: 756.304367ms Apr 30 13:49:54.599: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:49:54.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3334" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":9,"skipped":146,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:49:54.621: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 30 13:49:54.644: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-cb00ace9-96fb-45a0-be11-19fcf609efcc" in namespace "security-context-test-4581" to be "Succeeded or Failed" Apr 30 13:49:54.647: INFO: Pod "alpine-nnp-false-cb00ace9-96fb-45a0-be11-19fcf609efcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.542304ms Apr 30 13:49:56.650: INFO: Pod "alpine-nnp-false-cb00ace9-96fb-45a0-be11-19fcf609efcc": Phase="Running", Reason="", readiness=true. Elapsed: 2.005847547s Apr 30 13:49:58.653: INFO: Pod "alpine-nnp-false-cb00ace9-96fb-45a0-be11-19fcf609efcc": Phase="Running", Reason="", readiness=false. Elapsed: 4.009390021s Apr 30 13:50:00.658: INFO: Pod "alpine-nnp-false-cb00ace9-96fb-45a0-be11-19fcf609efcc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013423717s Apr 30 13:50:00.658: INFO: Pod "alpine-nnp-false-cb00ace9-96fb-45a0-be11-19fcf609efcc" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:50:00.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4581" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":152,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:50:00.716: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Apr 30 13:50:00.840: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Apr 30 13:50:00.871: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection 
[AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:50:00.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-7111" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":11,"skipped":163,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:49:44.128: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod pod-subpath-test-configmap-d7xw STEP: Creating a pod to test atomic-volume-subpath Apr 30 13:49:44.166: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d7xw" in namespace "subpath-3158" to be "Succeeded or Failed" Apr 30 13:49:44.169: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47948ms Apr 30 13:49:46.176: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009732462s Apr 30 13:49:48.180: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. Elapsed: 4.014146463s Apr 30 13:49:50.184: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. Elapsed: 6.017485971s Apr 30 13:49:52.187: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. Elapsed: 8.021123456s Apr 30 13:49:54.192: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. Elapsed: 10.026229059s Apr 30 13:49:56.196: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. Elapsed: 12.029968579s Apr 30 13:49:58.201: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. Elapsed: 14.034661792s Apr 30 13:50:00.205: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. Elapsed: 16.038367931s Apr 30 13:50:02.212: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. Elapsed: 18.045621365s Apr 30 13:50:04.217: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=true. Elapsed: 20.050901402s Apr 30 13:50:06.221: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Running", Reason="", readiness=false. Elapsed: 22.054986728s Apr 30 13:50:08.225: INFO: Pod "pod-subpath-test-configmap-d7xw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.058834859s STEP: Saw pod success Apr 30 13:50:08.225: INFO: Pod "pod-subpath-test-configmap-d7xw" satisfied condition "Succeeded or Failed" Apr 30 13:50:08.228: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-subpath-test-configmap-d7xw container test-container-subpath-configmap-d7xw: <nil> STEP: delete the pod Apr 30 13:50:08.241: INFO: Waiting for pod pod-subpath-test-configmap-d7xw to disappear Apr 30 13:50:08.243: INFO: Pod pod-subpath-test-configmap-d7xw no longer exists STEP: Deleting pod pod-subpath-test-configmap-d7xw Apr 30 13:50:08.243: INFO: Deleting pod "pod-subpath-test-configmap-d7xw" in namespace "subpath-3158" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:50:08.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3158" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":16,"skipped":492,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:50:08.271: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating server pod server in namespace prestop-2895 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2895 STEP: Deleting pre-stop pod Apr 30 13:50:17.375: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:50:17.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2895" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":17,"skipped":502,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:50:17.406: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Apr 30 13:50:17.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3a706f0-08af-4e21-a098-12de3275a04c" in namespace "projected-1474" to be "Succeeded or Failed" Apr 30 13:50:17.447: INFO: Pod "downwardapi-volume-c3a706f0-08af-4e21-a098-12de3275a04c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.654963ms Apr 30 13:50:19.452: INFO: Pod "downwardapi-volume-c3a706f0-08af-4e21-a098-12de3275a04c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007514784s Apr 30 13:50:21.456: INFO: Pod "downwardapi-volume-c3a706f0-08af-4e21-a098-12de3275a04c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011918944s STEP: Saw pod success Apr 30 13:50:21.456: INFO: Pod "downwardapi-volume-c3a706f0-08af-4e21-a098-12de3275a04c" satisfied condition "Succeeded or Failed" Apr 30 13:50:21.459: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-a2pwxc pod downwardapi-volume-c3a706f0-08af-4e21-a098-12de3275a04c container client-container: <nil> STEP: delete the pod Apr 30 13:50:21.476: INFO: Waiting for pod downwardapi-volume-c3a706f0-08af-4e21-a098-12de3275a04c to disappear Apr 30 13:50:21.479: INFO: Pod downwardapi-volume-c3a706f0-08af-4e21-a098-12de3275a04c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:50:21.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1474" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":511,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:50:21.537: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:50:21.557: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 
STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-8198 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:50:27.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-4802" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:50:27.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8198" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":19,"skipped":551,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:50:27.675: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Apr 30 13:50:27.698: INFO: >>> kubeConfig: /tmp/kubeconfig [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the sample API server. 
Apr 30 13:50:28.170: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created Apr 30 13:50:30.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7b4b967944\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 30 13:50:32.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7b4b967944\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 30 13:50:34.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 50, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7b4b967944\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 30 13:50:36.349: INFO: Waited 114.855085ms for the sample-apiserver to be ready to handle requests.
STEP: Read Status for v1alpha1.wardle.example.com
STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'
STEP: List APIServices
Apr 30 13:50:36.393: INFO: Found v1alpha1.wardle.example.com in APIServiceList
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:50:36.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9978" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":20,"skipped":554,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:50:00.948: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6482.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6482.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6482.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6482.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 30 13:50:03.013: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb)
Apr 30 13:50:03.017: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb)
Apr 30 13:50:03.020: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods 
dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:03.024: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:03.028: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:03.036: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:03.040: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:03.043: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:03.043: INFO: Lookups using dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local 
jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local] Apr 30 13:50:08.047: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:08.050: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:08.053: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:08.057: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:08.059: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:08.062: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:08.064: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods 
dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:08.066: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:08.066: INFO: Lookups using dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local] Apr 30 13:50:13.049: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:13.052: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:13.055: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:13.058: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods 
dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:13.060: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:13.062: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:13.064: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:13.066: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:13.066: INFO: Lookups using dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local] Apr 30 13:50:18.048: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested 
resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:18.052: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:18.054: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:18.058: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:18.061: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:18.066: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:18.070: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:18.072: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) 
Apr 30 13:50:18.072: INFO: Lookups using dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local] Apr 30 13:50:23.047: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:23.049: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:23.053: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:23.056: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:23.058: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods 
dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:23.061: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:23.063: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:23.066: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:23.066: INFO: Lookups using dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local] Apr 30 13:50:28.048: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:28.053: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested 
resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:28.057: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:28.062: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:28.065: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:28.068: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:28.071: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:28.073: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:28.073: INFO: Lookups using dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local] Apr 30 13:50:33.060: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:33.063: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:33.066: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:33.069: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:33.072: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb) Apr 30 13:50:33.075: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods 
dns-test-afecfb89-30b8-4445-89a2-17780d8393bb)
Apr 30 13:50:33.077: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb)
Apr 30 13:50:33.079: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local from pod dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb: the server could not find the requested resource (get pods dns-test-afecfb89-30b8-4445-89a2-17780d8393bb)
Apr 30 13:50:33.079: INFO: Lookups using dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6482.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6482.svc.cluster.local jessie_udp@dns-test-service-2.dns-6482.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6482.svc.cluster.local]
Apr 30 13:50:38.070: INFO: DNS probes using dns-6482/dns-test-afecfb89-30b8-4445-89a2-17780d8393bb succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:50:38.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6482" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":12,"skipped":166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:50:36.918: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should create services for rc [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating Agnhost RC
Apr 30 13:50:36.942: INFO: namespace kubectl-5734
Apr 30 13:50:36.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5734 create -f -'
Apr 30 13:50:37.823: INFO: stderr: ""
Apr 30 13:50:37.824: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Apr 30 13:50:38.827: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:38.827: INFO: Found 0 / 1 Apr 30 13:50:39.833: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:39.833: INFO: Found 0 / 1 Apr 30 13:50:40.829: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:40.829: INFO: Found 0 / 1 Apr 30 13:50:41.834: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:41.834: INFO: Found 0 / 1 Apr 30 13:50:42.835: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:42.835: INFO: Found 0 / 1 Apr 30 13:50:43.883: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:43.883: INFO: Found 0 / 1 Apr 30 13:50:44.835: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:44.835: INFO: Found 0 / 1 Apr 30 13:50:45.828: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:45.828: INFO: Found 0 / 1 Apr 30 13:50:46.832: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:46.832: INFO: Found 0 / 1 Apr 30 13:50:47.827: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:47.827: INFO: Found 1 / 1 Apr 30 13:50:47.827: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 30 13:50:47.830: INFO: Selector matched 1 pods for map[app:agnhost] Apr 30 13:50:47.830: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 30 13:50:47.830: INFO: wait on agnhost-primary startup in kubectl-5734
Apr 30 13:50:47.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5734 logs agnhost-primary-zs6r2 agnhost-primary'
Apr 30 13:50:47.917: INFO: stderr: ""
Apr 30 13:50:47.917: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 30 13:50:47.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5734 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Apr 30 13:50:48.004: INFO: stderr: ""
Apr 30 13:50:48.004: INFO: stdout: "service/rm2 exposed\n"
Apr 30 13:50:48.017: INFO: Service rm2 in namespace kubectl-5734 found.
STEP: exposing service
Apr 30 13:50:50.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5734 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Apr 30 13:50:50.108: INFO: stderr: ""
Apr 30 13:50:50.108: INFO: stdout: "service/rm3 exposed\n"
Apr 30 13:50:50.114: INFO: Service rm3 in namespace kubectl-5734 found.
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:50:52.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5734" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":21,"skipped":569,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:50:52.134: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:50:52.160: INFO: The status of Pod busybox-host-aliases703b532e-e2e7-4d32-8508-f2fee38853b1 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:50:54.164: INFO: The status of Pod busybox-host-aliases703b532e-e2e7-4d32-8508-f2fee38853b1 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:50:56.164: INFO: The status of Pod busybox-host-aliases703b532e-e2e7-4d32-8508-f2fee38853b1 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:50:58.165: INFO: The status of Pod busybox-host-aliases703b532e-e2e7-4d32-8508-f2fee38853b1 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:51:00.163: INFO: The status of Pod busybox-host-aliases703b532e-e2e7-4d32-8508-f2fee38853b1 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:00.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5961" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":573,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:49:22.484: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:00.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-4804" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":17,"skipped":361,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:51:00.192: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 30 13:51:00.767: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 30 13:51:02.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 30, 13, 51, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 51, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2022, time.April, 30, 13, 51, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 51, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 30 13:51:04.781: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 30, 13, 51, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 51, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 51, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 51, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 30 13:51:07.791: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 30 13:51:07.794: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8176-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:10.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9252" for this suite. STEP: Destroying namespace "webhook-9252-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":23,"skipped":576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:50:38.165: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be 
provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics Apr 30 13:51:18.276: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-i77nai-control-plane-r7q6n is Running (Ready = true) Apr 30 13:51:18.417: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Apr 30 13:51:18.417: INFO: Deleting pod "simpletest.rc-226bb" in namespace "gc-2185" Apr 30 13:51:18.428: INFO: Deleting pod "simpletest.rc-2ftkj" in namespace "gc-2185" Apr 30 13:51:18.436: INFO: Deleting pod "simpletest.rc-2fxfh" in namespace "gc-2185" Apr 30 13:51:18.447: INFO: Deleting pod "simpletest.rc-2gr24" in namespace "gc-2185" Apr 30 13:51:18.460: INFO: Deleting pod "simpletest.rc-2k8pr" in namespace "gc-2185" Apr 30 13:51:18.476: INFO: Deleting pod 
"simpletest.rc-2rbcp" in namespace "gc-2185" Apr 30 13:51:18.489: INFO: Deleting pod "simpletest.rc-2xlwx" in namespace "gc-2185" Apr 30 13:51:18.499: INFO: Deleting pod "simpletest.rc-47kn4" in namespace "gc-2185" Apr 30 13:51:18.511: INFO: Deleting pod "simpletest.rc-47t2d" in namespace "gc-2185" Apr 30 13:51:18.533: INFO: Deleting pod "simpletest.rc-4q4n2" in namespace "gc-2185" Apr 30 13:51:18.548: INFO: Deleting pod "simpletest.rc-566d9" in namespace "gc-2185" Apr 30 13:51:18.571: INFO: Deleting pod "simpletest.rc-567jm" in namespace "gc-2185" Apr 30 13:51:18.601: INFO: Deleting pod "simpletest.rc-57pfq" in namespace "gc-2185" Apr 30 13:51:18.622: INFO: Deleting pod "simpletest.rc-5dxdl" in namespace "gc-2185" Apr 30 13:51:18.637: INFO: Deleting pod "simpletest.rc-5shrw" in namespace "gc-2185" Apr 30 13:51:18.656: INFO: Deleting pod "simpletest.rc-5smng" in namespace "gc-2185" Apr 30 13:51:18.688: INFO: Deleting pod "simpletest.rc-5wrr4" in namespace "gc-2185" Apr 30 13:51:18.713: INFO: Deleting pod "simpletest.rc-67dqm" in namespace "gc-2185" Apr 30 13:51:18.748: INFO: Deleting pod "simpletest.rc-6kmks" in namespace "gc-2185" Apr 30 13:51:18.775: INFO: Deleting pod "simpletest.rc-7h2sf" in namespace "gc-2185" Apr 30 13:51:18.802: INFO: Deleting pod "simpletest.rc-7hd4h" in namespace "gc-2185" Apr 30 13:51:18.844: INFO: Deleting pod "simpletest.rc-84hq2" in namespace "gc-2185" Apr 30 13:51:18.881: INFO: Deleting pod "simpletest.rc-884dj" in namespace "gc-2185" Apr 30 13:51:18.963: INFO: Deleting pod "simpletest.rc-8d94j" in namespace "gc-2185" Apr 30 13:51:19.026: INFO: Deleting pod "simpletest.rc-8nsxg" in namespace "gc-2185" Apr 30 13:51:19.087: INFO: Deleting pod "simpletest.rc-8rp2x" in namespace "gc-2185" Apr 30 13:51:19.118: INFO: Deleting pod "simpletest.rc-9mftv" in namespace "gc-2185" Apr 30 13:51:19.136: INFO: Deleting pod "simpletest.rc-9zkjs" in namespace "gc-2185" Apr 30 13:51:19.170: INFO: Deleting pod "simpletest.rc-b2r5m" in namespace "gc-2185" 
Apr 30 13:51:19.231: INFO: Deleting pod "simpletest.rc-b6hlk" in namespace "gc-2185" Apr 30 13:51:19.254: INFO: Deleting pod "simpletest.rc-bbbp5" in namespace "gc-2185" Apr 30 13:51:19.306: INFO: Deleting pod "simpletest.rc-bf74c" in namespace "gc-2185" Apr 30 13:51:19.340: INFO: Deleting pod "simpletest.rc-bjxnd" in namespace "gc-2185" Apr 30 13:51:19.384: INFO: Deleting pod "simpletest.rc-brftf" in namespace "gc-2185" Apr 30 13:51:19.434: INFO: Deleting pod "simpletest.rc-chxkg" in namespace "gc-2185" Apr 30 13:51:19.458: INFO: Deleting pod "simpletest.rc-cs2vw" in namespace "gc-2185" Apr 30 13:51:19.495: INFO: Deleting pod "simpletest.rc-fb9cm" in namespace "gc-2185" Apr 30 13:51:19.541: INFO: Deleting pod "simpletest.rc-flw96" in namespace "gc-2185" Apr 30 13:51:19.605: INFO: Deleting pod "simpletest.rc-gwvxj" in namespace "gc-2185" Apr 30 13:51:19.703: INFO: Deleting pod "simpletest.rc-hjh4q" in namespace "gc-2185" Apr 30 13:51:19.765: INFO: Deleting pod "simpletest.rc-hjrl7" in namespace "gc-2185" Apr 30 13:51:19.793: INFO: Deleting pod "simpletest.rc-hn7lf" in namespace "gc-2185" Apr 30 13:51:19.834: INFO: Deleting pod "simpletest.rc-hs75p" in namespace "gc-2185" Apr 30 13:51:19.872: INFO: Deleting pod "simpletest.rc-hszd9" in namespace "gc-2185" Apr 30 13:51:19.911: INFO: Deleting pod "simpletest.rc-hz8xm" in namespace "gc-2185" Apr 30 13:51:19.977: INFO: Deleting pod "simpletest.rc-j2sl4" in namespace "gc-2185" Apr 30 13:51:20.021: INFO: Deleting pod "simpletest.rc-khwnl" in namespace "gc-2185" Apr 30 13:51:20.076: INFO: Deleting pod "simpletest.rc-kkv4p" in namespace "gc-2185" Apr 30 13:51:20.123: INFO: Deleting pod "simpletest.rc-kl5rh" in namespace "gc-2185" Apr 30 13:51:20.293: INFO: Deleting pod "simpletest.rc-ktk2c" in namespace "gc-2185" Apr 30 13:51:20.315: INFO: Deleting pod "simpletest.rc-kz9hj" in namespace "gc-2185" Apr 30 13:51:20.338: INFO: Deleting pod "simpletest.rc-kzn7x" in namespace "gc-2185" Apr 30 13:51:20.352: INFO: Deleting pod 
"simpletest.rc-l658c" in namespace "gc-2185" Apr 30 13:51:20.459: INFO: Deleting pod "simpletest.rc-l88vj" in namespace "gc-2185" Apr 30 13:51:20.562: INFO: Deleting pod "simpletest.rc-lgnxq" in namespace "gc-2185" Apr 30 13:51:20.643: INFO: Deleting pod "simpletest.rc-ltvnr" in namespace "gc-2185" Apr 30 13:51:20.699: INFO: Deleting pod "simpletest.rc-mf5qf" in namespace "gc-2185" Apr 30 13:51:20.774: INFO: Deleting pod "simpletest.rc-mkhtz" in namespace "gc-2185" Apr 30 13:51:20.859: INFO: Deleting pod "simpletest.rc-mrkcc" in namespace "gc-2185" Apr 30 13:51:20.901: INFO: Deleting pod "simpletest.rc-mv42b" in namespace "gc-2185" Apr 30 13:51:20.937: INFO: Deleting pod "simpletest.rc-n9trf" in namespace "gc-2185" Apr 30 13:51:21.054: INFO: Deleting pod "simpletest.rc-nhjs9" in namespace "gc-2185" Apr 30 13:51:21.112: INFO: Deleting pod "simpletest.rc-nr4mk" in namespace "gc-2185" Apr 30 13:51:21.124: INFO: Deleting pod "simpletest.rc-nsbxj" in namespace "gc-2185" Apr 30 13:51:21.169: INFO: Deleting pod "simpletest.rc-nsfd7" in namespace "gc-2185" Apr 30 13:51:21.212: INFO: Deleting pod "simpletest.rc-pkzjj" in namespace "gc-2185" Apr 30 13:51:21.244: INFO: Deleting pod "simpletest.rc-px7vv" in namespace "gc-2185" Apr 30 13:51:21.273: INFO: Deleting pod "simpletest.rc-q72xd" in namespace "gc-2185" Apr 30 13:51:21.293: INFO: Deleting pod "simpletest.rc-qmf5f" in namespace "gc-2185" Apr 30 13:51:21.332: INFO: Deleting pod "simpletest.rc-qp6fd" in namespace "gc-2185" Apr 30 13:51:21.394: INFO: Deleting pod "simpletest.rc-qvdvj" in namespace "gc-2185" Apr 30 13:51:21.423: INFO: Deleting pod "simpletest.rc-r4tl2" in namespace "gc-2185" Apr 30 13:51:21.456: INFO: Deleting pod "simpletest.rc-r6fq9" in namespace "gc-2185" Apr 30 13:51:21.489: INFO: Deleting pod "simpletest.rc-rfdtx" in namespace "gc-2185" Apr 30 13:51:21.561: INFO: Deleting pod "simpletest.rc-rgqxb" in namespace "gc-2185" Apr 30 13:51:21.579: INFO: Deleting pod "simpletest.rc-rlt2z" in namespace "gc-2185" 
Apr 30 13:51:21.606: INFO: Deleting pod "simpletest.rc-rpcwc" in namespace "gc-2185" Apr 30 13:51:21.651: INFO: Deleting pod "simpletest.rc-rxhx7" in namespace "gc-2185" Apr 30 13:51:21.684: INFO: Deleting pod "simpletest.rc-s6vd8" in namespace "gc-2185" Apr 30 13:51:21.715: INFO: Deleting pod "simpletest.rc-sb4fp" in namespace "gc-2185" Apr 30 13:51:21.746: INFO: Deleting pod "simpletest.rc-shlww" in namespace "gc-2185" Apr 30 13:51:21.768: INFO: Deleting pod "simpletest.rc-sn6gp" in namespace "gc-2185" Apr 30 13:51:21.793: INFO: Deleting pod "simpletest.rc-tdrhw" in namespace "gc-2185" Apr 30 13:51:21.823: INFO: Deleting pod "simpletest.rc-th6v5" in namespace "gc-2185" Apr 30 13:51:21.859: INFO: Deleting pod "simpletest.rc-tppct" in namespace "gc-2185" Apr 30 13:51:21.897: INFO: Deleting pod "simpletest.rc-tqhsn" in namespace "gc-2185" Apr 30 13:51:21.921: INFO: Deleting pod "simpletest.rc-v4l2l" in namespace "gc-2185" Apr 30 13:51:21.956: INFO: Deleting pod "simpletest.rc-v7cgn" in namespace "gc-2185" Apr 30 13:51:21.992: INFO: Deleting pod "simpletest.rc-v8g86" in namespace "gc-2185" Apr 30 13:51:22.062: INFO: Deleting pod "simpletest.rc-v8zbq" in namespace "gc-2185" Apr 30 13:51:22.077: INFO: Deleting pod "simpletest.rc-vb59q" in namespace "gc-2185" Apr 30 13:51:22.137: INFO: Deleting pod "simpletest.rc-vbhsq" in namespace "gc-2185" Apr 30 13:51:22.172: INFO: Deleting pod "simpletest.rc-vjbqx" in namespace "gc-2185" Apr 30 13:51:22.208: INFO: Deleting pod "simpletest.rc-vtr4n" in namespace "gc-2185" Apr 30 13:51:22.247: INFO: Deleting pod "simpletest.rc-w27zf" in namespace "gc-2185" Apr 30 13:51:22.268: INFO: Deleting pod "simpletest.rc-wgd4h" in namespace "gc-2185" Apr 30 13:51:22.286: INFO: Deleting pod "simpletest.rc-wv5km" in namespace "gc-2185" Apr 30 13:51:22.305: INFO: Deleting pod "simpletest.rc-xd9rr" in namespace "gc-2185" Apr 30 13:51:22.333: INFO: Deleting pod "simpletest.rc-zhbt4" in namespace "gc-2185" Apr 30 13:51:22.342: INFO: Deleting pod 
"simpletest.rc-zhfqp" in namespace "gc-2185" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:22.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2185" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":13,"skipped":201,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:51:22.409: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the 
sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:28.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6407" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:51:11.089: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: set up a multi version CRD Apr 30 13:51:11.112: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:31.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9782" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":24,"skipped":650,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":14,"skipped":208,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:51:28.508: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 30 13:51:29.010: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 30 13:51:32.037: INFO: Waiting for amount of 
service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 30 13:51:32.041: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:35.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4036" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":15,"skipped":208,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:51:35.294: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Apr 30 13:51:35.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-894cdb25-9d3d-4613-abfc-e18064df3975" in namespace "downward-api-2464" to be "Succeeded or Failed" Apr 30 13:51:35.408: INFO: Pod "downwardapi-volume-894cdb25-9d3d-4613-abfc-e18064df3975": Phase="Pending", Reason="", readiness=false. Elapsed: 11.54783ms Apr 30 13:51:37.411: INFO: Pod "downwardapi-volume-894cdb25-9d3d-4613-abfc-e18064df3975": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015345224s Apr 30 13:51:39.415: INFO: Pod "downwardapi-volume-894cdb25-9d3d-4613-abfc-e18064df3975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01882631s STEP: Saw pod success Apr 30 13:51:39.415: INFO: Pod "downwardapi-volume-894cdb25-9d3d-4613-abfc-e18064df3975" satisfied condition "Succeeded or Failed" Apr 30 13:51:39.417: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-a2pwxc pod downwardapi-volume-894cdb25-9d3d-4613-abfc-e18064df3975 container client-container: <nil> STEP: delete the pod Apr 30 13:51:39.430: INFO: Waiting for pod downwardapi-volume-894cdb25-9d3d-4613-abfc-e18064df3975 to disappear Apr 30 13:51:39.434: INFO: Pod downwardapi-volume-894cdb25-9d3d-4613-abfc-e18064df3975 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:39.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2464" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":211,"failed":0} [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:51:39.443: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-projected-all-test-volume-be79ba38-7ffe-4fc4-9239-1be5c93b1dc5 STEP: Creating secret with name secret-projected-all-test-volume-3b30380e-cf95-4f89-928a-ae4ca0b0b22e STEP: Creating a pod to test Check all projections for projected volume plugin Apr 30 13:51:39.475: INFO: Waiting up to 5m0s for pod "projected-volume-103f1976-9459-4354-8409-1124a17e257b" in namespace "projected-7463" to be "Succeeded or Failed" Apr 30 13:51:39.477: INFO: Pod "projected-volume-103f1976-9459-4354-8409-1124a17e257b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2461ms Apr 30 13:51:41.481: INFO: Pod "projected-volume-103f1976-9459-4354-8409-1124a17e257b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006123546s Apr 30 13:51:43.484: INFO: Pod "projected-volume-103f1976-9459-4354-8409-1124a17e257b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009236509s STEP: Saw pod success Apr 30 13:51:43.484: INFO: Pod "projected-volume-103f1976-9459-4354-8409-1124a17e257b" satisfied condition "Succeeded or Failed" Apr 30 13:51:43.486: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod projected-volume-103f1976-9459-4354-8409-1124a17e257b container projected-all-volume-test: <nil> STEP: delete the pod Apr 30 13:51:43.501: INFO: Waiting for pod projected-volume-103f1976-9459-4354-8409-1124a17e257b to disappear Apr 30 13:51:43.504: INFO: Pod projected-volume-103f1976-9459-4354-8409-1124a17e257b no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:43.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7463" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":211,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:51:31.458: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service endpoint-test2 in namespace services-5878 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5878 to expose endpoints map[] Apr 30 13:51:31.489: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Apr 30 13:51:32.496: INFO: successfully validated that service endpoint-test2 in namespace services-5878 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-5878 Apr 30 13:51:32.504: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 30 13:51:34.508: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5878 to expose endpoints map[pod1:[80]] Apr 30 13:51:34.520: INFO: successfully validated that service endpoint-test2 in namespace services-5878 exposes endpoints map[pod1:[80]] STEP: Checking if the Service forwards traffic to pod1 Apr 30 13:51:34.520: INFO: Creating new exec pod Apr 30 13:51:37.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5878 exec execpodk99dh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Apr 30 13:51:37.704: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" Apr 30 13:51:37.704: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 30 13:51:37.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5878 exec execpodk99dh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
10.128.250.163 80' Apr 30 13:51:37.856: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.128.250.163 80\nConnection to 10.128.250.163 80 port [tcp/http] succeeded!\n" Apr 30 13:51:37.856: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" �[1mSTEP�[0m: Creating pod pod2 in namespace services-5878 Apr 30 13:51:37.864: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 30 13:51:39.869: INFO: The status of Pod pod2 is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-5878 to expose endpoints map[pod1:[80] pod2:[80]] Apr 30 13:51:39.882: INFO: successfully validated that service endpoint-test2 in namespace services-5878 exposes endpoints map[pod1:[80] pod2:[80]] �[1mSTEP�[0m: Checking if the Service forwards traffic to pod1 and pod2 Apr 30 13:51:40.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5878 exec execpodk99dh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Apr 30 13:51:41.037: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" Apr 30 13:51:41.037: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 30 13:51:41.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5878 exec execpodk99dh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.128.250.163 80' Apr 30 13:51:41.185: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.128.250.163 80\nConnection to 10.128.250.163 80 port [tcp/http] succeeded!\n" Apr 30 13:51:41.185: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" �[1mSTEP�[0m: Deleting pod pod1 in namespace services-5878 �[1mSTEP�[0m: waiting up to 
3m0s for service endpoint-test2 in namespace services-5878 to expose endpoints map[pod2:[80]] Apr 30 13:51:42.223: INFO: successfully validated that service endpoint-test2 in namespace services-5878 exposes endpoints map[pod2:[80]] �[1mSTEP�[0m: Checking if the Service forwards traffic to pod2 Apr 30 13:51:43.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5878 exec execpodk99dh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Apr 30 13:51:43.386: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" Apr 30 13:51:43.386: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 30 13:51:43.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5878 exec execpodk99dh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.128.250.163 80' Apr 30 13:51:43.549: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.128.250.163 80\nConnection to 10.128.250.163 80 port [tcp/http] succeeded!\n" Apr 30 13:51:43.549: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" �[1mSTEP�[0m: Deleting pod pod2 in namespace services-5878 �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-5878 to expose endpoints map[] Apr 30 13:51:43.594: INFO: successfully validated that service endpoint-test2 in namespace services-5878 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:51:43.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-5878" for this suite. 
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":25,"skipped":671,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] server version
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:51:43.703: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should find the server version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Request ServerVersion
STEP: Confirm major version
Apr 30 13:51:43.753: INFO: Major version: 1
STEP: Confirm minor version
Apr 30 13:51:43.753: INFO: cleanMinorVersion: 23
Apr 30 13:51:43.753: INFO: Minor version: 23
[AfterEach] [sig-api-machinery] server version
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:51:43.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-4049" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":26,"skipped":701,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:51:43.779: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:51:45.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-6360" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":27,"skipped":707,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:51:43.534: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should test the lifecycle of a ReplicationController [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ReplicationController
STEP: waiting for RC to be added
STEP: waiting for available Replicas
STEP: patching ReplicationController
STEP: waiting for RC to be modified
STEP: patching ReplicationController status
STEP: waiting for RC to be modified
STEP: waiting for available Replicas
STEP: fetching ReplicationController status
STEP: patching ReplicationController scale
STEP: waiting for RC to be modified
STEP: waiting for ReplicationController's scale to be the max amount
STEP: fetching ReplicationController; ensuring that it's patched
STEP: updating ReplicationController status
STEP: waiting for RC to be modified
STEP: listing all ReplicationControllers
STEP: checking that ReplicationController has expected values
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:51:46.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8207" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":18,"skipped":228,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:51:46.041: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333
STEP: creating the pod
Apr 30 13:51:46.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1597 create -f -'
Apr 30 13:51:46.901: INFO: stderr: ""
Apr 30 13:51:46.901: INFO: stdout: "pod/pause created\n"
Apr 30 13:51:46.901: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 30 13:51:46.901: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1597" to be "running and ready"
Apr 30 13:51:46.906: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151914ms
Apr 30 13:51:48.909: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.007681259s
Apr 30 13:51:48.909: INFO: Pod "pause" satisfied condition "running and ready"
Apr 30 13:51:48.909: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 30 13:51:48.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1597 label pods pause testing-label=testing-label-value'
Apr 30 13:51:48.991: INFO: stderr: ""
Apr 30 13:51:48.991: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 30 13:51:48.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1597 get pod pause -L testing-label'
Apr 30 13:51:49.065: INFO: stderr: ""
Apr 30 13:51:49.065: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 30 13:51:49.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1597 label pods pause testing-label-'
Apr 30 13:51:49.139: INFO: stderr: ""
Apr 30 13:51:49.139: INFO: stdout: "pod/pause unlabeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 30 13:51:49.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1597 get pod pause -L testing-label'
Apr 30 13:51:49.207: INFO: stderr: ""
Apr 30 13:51:49.207: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n"
[AfterEach] Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1339
STEP: using delete to clean up resources
Apr 30 13:51:49.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1597 delete --grace-period=0 --force -f -'
Apr 30 13:51:49.282: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 30 13:51:49.282: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 30 13:51:49.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1597 get rc,svc -l name=pause --no-headers'
Apr 30 13:51:49.356: INFO: stderr: "No resources found in kubectl-1597 namespace.\n"
Apr 30 13:51:49.357: INFO: stdout: ""
Apr 30 13:51:49.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1597 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 30 13:51:49.421: INFO: stderr: ""
Apr 30 13:51:49.421: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:51:49.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1597" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":19,"skipped":234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:51:49.565: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-801efa70-e45e-448f-a4ef-4c3f28e34cbd
STEP: Creating a pod to test consume configMaps
Apr 30 13:51:49.592: INFO: Waiting up to 5m0s for pod "pod-configmaps-fda36a84-72e0-49db-b929-4324fe2983be" in namespace "configmap-2032" to be "Succeeded or Failed"
Apr 30 13:51:49.596: INFO: Pod "pod-configmaps-fda36a84-72e0-49db-b929-4324fe2983be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.815323ms
Apr 30 13:51:51.600: INFO: Pod "pod-configmaps-fda36a84-72e0-49db-b929-4324fe2983be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007521637s
Apr 30 13:51:53.604: INFO: Pod "pod-configmaps-fda36a84-72e0-49db-b929-4324fe2983be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011414646s
STEP: Saw pod success
Apr 30 13:51:53.604: INFO: Pod "pod-configmaps-fda36a84-72e0-49db-b929-4324fe2983be" satisfied condition "Succeeded or Failed"
Apr 30 13:51:53.606: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-configmaps-fda36a84-72e0-49db-b929-4324fe2983be container agnhost-container: <nil>
STEP: delete the pod
Apr 30 13:51:53.621: INFO: Waiting for pod pod-configmaps-fda36a84-72e0-49db-b929-4324fe2983be to disappear
Apr 30 13:51:53.623: INFO: Pod pod-configmaps-fda36a84-72e0-49db-b929-4324fe2983be no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:51:53.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2032" for this suite.
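Editor's note: for readers unfamiliar with this conformance case, the pod the test creates is roughly of this shape: a ConfigMap mounted as a volume, read by a container running as a non-root UID. Every name, image tag, and flag below is a hypothetical reconstruction for illustration, not the test's actual manifest:

```yaml
# Hypothetical sketch of the kind of pod the [sig-storage] ConfigMap
# non-root test creates; names, UID, image tag, and flags are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000            # non-root, the point of this test variant
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest", "--file_content=/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example
```

The test waits for the pod to reach "Succeeded or Failed", then inspects the container log for the expected file content, which is why the pod above runs to completion rather than serving.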
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":345,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:51:53.662: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] should validate Deployment Status endpoints [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Deployment
Apr 30 13:51:53.727: INFO: Creating simple deployment test-deployment-2vfpx
Apr 30 13:51:53.736: INFO: deployment "test-deployment-2vfpx" doesn't have the required revision set
STEP: Getting /status
Apr 30 13:51:55.751: INFO: Deployment test-deployment-2vfpx has Conditions: [{Available True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2vfpx-764bc7c4b7" has successfully progressed.}]
STEP: updating Deployment Status
Apr 30 13:51:55.758: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 51, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 51, 54, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 51, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 51, 53, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-2vfpx-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Deployment status to be updated
Apr 30 13:51:55.761: INFO: Observed &Deployment event: ADDED
Apr 30 13:51:55.761: INFO: Observed Deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2vfpx-764bc7c4b7"}
Apr 30 13:51:55.761: INFO: Observed &Deployment event: MODIFIED
Apr 30 13:51:55.761: INFO: Observed Deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2vfpx-764bc7c4b7"}
Apr 30 13:51:55.761: INFO: Observed Deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Apr 30 13:51:55.761: INFO: Observed &Deployment event: MODIFIED
Apr 30 13:51:55.761: INFO: Observed Deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Apr 30 13:51:55.761: INFO: Observed Deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-2vfpx-764bc7c4b7" is progressing.}
Apr 30 13:51:55.761: INFO: Observed &Deployment event: MODIFIED
Apr 30 13:51:55.761: INFO: Observed Deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Apr 30 13:51:55.761: INFO: Observed Deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2vfpx-764bc7c4b7" has successfully progressed.}
Apr 30 13:51:55.761: INFO: Observed &Deployment event: MODIFIED
Apr 30 13:51:55.761: INFO: Observed Deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Apr 30 13:51:55.761: INFO: Observed Deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2vfpx-764bc7c4b7" has successfully progressed.}
Apr 30 13:51:55.761: INFO: Found Deployment test-deployment-2vfpx in namespace deployment-9829 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Apr 30 13:51:55.761: INFO: Deployment test-deployment-2vfpx has an updated status
STEP: patching the Statefulset Status
Apr 30 13:51:55.761: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
Apr 30 13:51:55.766: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}}
STEP: watching for the Deployment status to be patched
Apr 30 13:51:55.768: INFO: Observed &Deployment event: ADDED
Apr 30 13:51:55.768: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2vfpx-764bc7c4b7"}
Apr 30 13:51:55.768: INFO: Observed &Deployment event: MODIFIED
Apr 30 13:51:55.768: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2vfpx-764bc7c4b7"}
Apr 30 13:51:55.768: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Apr 30 13:51:55.768: INFO: Observed &Deployment event: MODIFIED
Apr 30 13:51:55.768: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Apr 30 13:51:55.768: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:53 +0000 UTC 2022-04-30 13:51:53 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-2vfpx-764bc7c4b7" is progressing.}
Apr 30 13:51:55.769: INFO: Observed &Deployment event: MODIFIED
Apr 30 13:51:55.769: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Apr 30 13:51:55.769: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2vfpx-764bc7c4b7" has successfully progressed.}
Apr 30 13:51:55.769: INFO: Observed &Deployment event: MODIFIED
Apr 30 13:51:55.769: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Apr 30 13:51:55.769: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-30 13:51:54 +0000 UTC 2022-04-30 13:51:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2vfpx-764bc7c4b7" has successfully progressed.}
Apr 30 13:51:55.769: INFO: Observed deployment test-deployment-2vfpx in namespace deployment-9829 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Apr 30 13:51:55.769: INFO: Observed &Deployment event: MODIFIED
Apr 30 13:51:55.769: INFO: Found deployment test-deployment-2vfpx in namespace deployment-9829 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  }
Apr 30 13:51:55.769: INFO: Deployment test-deployment-2vfpx has a patched status
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Apr 30 13:51:55.771: INFO: Deployment "test-deployment-2vfpx": &Deployment{ObjectMeta:{test-deployment-2vfpx deployment-9829 c98b4be1-ddfb-4a84-9f85-9b27744c9a69 10017 1 2022-04-30 13:51:53 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-04-30 13:51:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:51:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2022-04-30 13:51:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004add7e8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Apr 30 13:51:55.775: INFO: New ReplicaSet "test-deployment-2vfpx-764bc7c4b7" of Deployment "test-deployment-2vfpx": &ReplicaSet{ObjectMeta:{test-deployment-2vfpx-764bc7c4b7 deployment-9829 fa2d1a3e-91cd-4646-a35d-0c6af1879dbc 10012 1 2022-04-30 13:51:53 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-2vfpx c98b4be1-ddfb-4a84-9f85-9b27744c9a69 0xc004addb80 0xc004addb81}] [] [{kube-controller-manager Update apps/v1 2022-04-30 13:51:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c98b4be1-ddfb-4a84-9f85-9b27744c9a69\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:51:54 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004addc28 <nil> ClusterFirst map[] <nil> false false false <nil>
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:51:55.777: INFO: Pod "test-deployment-2vfpx-764bc7c4b7-tjtbl" is available: &Pod{ObjectMeta:{test-deployment-2vfpx-764bc7c4b7-tjtbl test-deployment-2vfpx-764bc7c4b7- deployment-9829 f3db87c0-90cc-4512-a6e2-df1e677d1b9b 10011 0 2022-04-30 13:51:53 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [{apps/v1 ReplicaSet test-deployment-2vfpx-764bc7c4b7 fa2d1a3e-91cd-4646-a35d-0c6af1879dbc 0xc004003fb0 0xc004003fb1}] [] [{kube-controller-manager Update v1 2022-04-30 13:51:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa2d1a3e-91cd-4646-a35d-0c6af1879dbc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:51:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.87\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4wlgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLi
st{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4wlgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralConta
iner{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:51:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:51:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:51:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.87,StartTime:2022-04-30 13:51:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:51:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://23bb1c38d1598af2c6c3f0bae27e5311efd9fb54642db1c93c48205cfbd6aa86,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.87,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:51:55.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9829" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":21,"skipped":373,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:51:45.904: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:52:02.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3755" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":28,"skipped":724,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:51:00.657: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name cm-test-opt-del-acf5524f-f876-4e59-8143-1bf725aa5482
STEP: Creating configMap with name cm-test-opt-upd-6e0f3c70-6fb7-4cec-a238-0e9edd3cf227
STEP: Creating the pod
Apr 30 13:51:00.701: INFO: The status of Pod pod-projected-configmaps-e93312e5-1762-4c70-a3c3-2ed9e51717ef is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:51:02.705: INFO: The status of Pod pod-projected-configmaps-e93312e5-1762-4c70-a3c3-2ed9e51717ef is Pending, waiting for it to be Running (with Ready = true)
Apr 30
13:51:04.707: INFO: The status of Pod pod-projected-configmaps-e93312e5-1762-4c70-a3c3-2ed9e51717ef is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:51:06.706: INFO: The status of Pod pod-projected-configmaps-e93312e5-1762-4c70-a3c3-2ed9e51717ef is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-acf5524f-f876-4e59-8143-1bf725aa5482
STEP: Updating configmap cm-test-opt-upd-6e0f3c70-6fb7-4cec-a238-0e9edd3cf227
STEP: Creating configMap with name cm-test-opt-create-c1afbffa-066a-4d8d-af01-1692f82d133c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:52:23.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5239" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":376,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:52:23.162: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:52:23.182: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Apr 30 13:52:23.191: INFO: The status of Pod pod-logs-websocket-9b1fea75-c579-4638-877a-0d3b627c1610 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:52:25.195: INFO: The status of Pod pod-logs-websocket-9b1fea75-c579-4638-877a-0d3b627c1610 is Running (Ready = true)
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:52:25.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7271" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":426,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:52:25.253: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-secret-vmwv
STEP: Creating a pod to test atomic-volume-subpath
Apr 30 13:52:25.284: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vmwv" in namespace "subpath-7537" to be "Succeeded or Failed"
Apr 30 13:52:25.286: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070388ms
Apr 30 13:52:27.291: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true.
Elapsed: 2.007066843s Apr 30 13:52:29.295: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true. Elapsed: 4.010693627s Apr 30 13:52:31.298: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true. Elapsed: 6.014021483s Apr 30 13:52:33.302: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true. Elapsed: 8.017605285s Apr 30 13:52:35.305: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true. Elapsed: 10.021265096s Apr 30 13:52:37.310: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true. Elapsed: 12.025780783s Apr 30 13:52:39.315: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true. Elapsed: 14.030520677s Apr 30 13:52:41.320: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true. Elapsed: 16.036110823s Apr 30 13:52:43.324: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true. Elapsed: 18.039452454s Apr 30 13:52:45.328: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=true. Elapsed: 20.044392039s Apr 30 13:52:47.336: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Running", Reason="", readiness=false. Elapsed: 22.051744031s Apr 30 13:52:49.341: INFO: Pod "pod-subpath-test-secret-vmwv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.056439389s
STEP: Saw pod success
Apr 30 13:52:49.341: INFO: Pod "pod-subpath-test-secret-vmwv" satisfied condition "Succeeded or Failed"
Apr 30 13:52:49.343: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-subpath-test-secret-vmwv container test-container-subpath-secret-vmwv: <nil>
STEP: delete the pod
Apr 30 13:52:49.357: INFO: Waiting for pod pod-subpath-test-secret-vmwv to disappear
Apr 30 13:52:49.360: INFO: Pod pod-subpath-test-secret-vmwv no longer exists
STEP: Deleting pod pod-subpath-test-secret-vmwv
Apr 30 13:52:49.360: INFO: Deleting pod "pod-subpath-test-secret-vmwv" in namespace "subpath-7537"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:52:49.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7537" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":20,"skipped":457,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:51:55.797: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-6348
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a new StatefulSet
Apr 30 13:51:55.834: INFO: Found 0 stateful pods, waiting for 3
Apr 30 13:52:05.839: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 30 13:52:05.839: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 30 13:52:05.839: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
Apr 30 13:52:05.867: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 30 13:52:15.897: INFO: Updating stateful set ss2
Apr 30 13:52:15.902: INFO: Waiting for Pod statefulset-6348/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
STEP: Restoring Pods to the correct revision when they are deleted
Apr 30 13:52:25.951: INFO: Found 2 stateful pods, waiting for 3
Apr 30 13:52:35.956: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running -
Ready=true
Apr 30 13:52:35.956: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 30 13:52:35.956: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 30 13:52:35.979: INFO: Updating stateful set ss2
Apr 30 13:52:35.985: INFO: Waiting for Pod statefulset-6348/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
Apr 30 13:52:46.012: INFO: Updating stateful set ss2
Apr 30 13:52:46.017: INFO: Waiting for StatefulSet statefulset-6348/ss2 to complete update
Apr 30 13:52:46.018: INFO: Waiting for Pod statefulset-6348/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Apr 30 13:52:56.024: INFO: Deleting all statefulset in ns statefulset-6348
Apr 30 13:52:56.027: INFO: Scaling statefulset ss2 to 0
Apr 30 13:53:06.041: INFO: Waiting for statefulset status.replicas updated to 0
Apr 30 13:53:06.044: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:06.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6348" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":22,"skipped":381,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:06.130: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:53:08.166: INFO: Deleting pod "var-expansion-44229498-b3ea-4d56-beda-29fef40971d0" in namespace "var-expansion-1832"
Apr 30 13:53:08.171: INFO: Wait up to 5m0s for pod "var-expansion-44229498-b3ea-4d56-beda-29fef40971d0" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:10.178: INFO: Waiting up to 3m0s for all (but 0)
nodes to be ready
STEP: Destroying namespace "var-expansion-1832" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":23,"skipped":432,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:48:04.234: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
W0430 13:48:04.255912 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 30 13:48:04.255: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Apr 30 13:48:04.288: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:48:06.292: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:48:08.356: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:48:10.291: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Apr 30 13:48:10.300: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
[... identical "Pending, waiting for it to be Running (with Ready = true)" messages repeated every ~2s through Apr 30 13:50:22.304 ...]
Apr 30 13:50:24.306: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
[... identical "Running (Ready = false)" messages repeated every ~2s through Apr 30 13:53:08.306 ...]
Apr 30 13:53:10.305: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
Apr 30 13:53:10.308: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
Apr 30 13:53:10.308: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002482b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc001c115d8, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc002030000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:72 +0x73
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:105 +0x32b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2456919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000502820, 0x73a1f18)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:10.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6638" for this suite.
• Failure [306.082 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart exec hook properly [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

    Apr 30 13:53:10.308: Unexpected error:
        <*errors.errorString | 0xc0002482b0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:10.197: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-map-251fffc6-b77c-48dc-9471-8e626f030483
STEP: Creating a pod to test consume secrets
Apr 30 13:53:10.225: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ac56ab6a-78df-4a2e-8891-14c5b0a74ffa" in namespace "projected-7124" to be "Succeeded or Failed"
Apr 30 13:53:10.228: INFO: Pod "pod-projected-secrets-ac56ab6a-78df-4a2e-8891-14c5b0a74ffa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304402ms
Apr 30 13:53:12.232: INFO: Pod "pod-projected-secrets-ac56ab6a-78df-4a2e-8891-14c5b0a74ffa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006311101s
Apr 30 13:53:14.236: INFO: Pod "pod-projected-secrets-ac56ab6a-78df-4a2e-8891-14c5b0a74ffa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010635132s
STEP: Saw pod success
Apr 30 13:53:14.236: INFO: Pod "pod-projected-secrets-ac56ab6a-78df-4a2e-8891-14c5b0a74ffa" satisfied condition "Succeeded or Failed"
Apr 30 13:53:14.238: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-projected-secrets-ac56ab6a-78df-4a2e-8891-14c5b0a74ffa container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 30 13:53:14.253: INFO: Waiting for pod pod-projected-secrets-ac56ab6a-78df-4a2e-8891-14c5b0a74ffa to disappear
Apr 30 13:53:14.255: INFO: Pod pod-projected-secrets-ac56ab6a-78df-4a2e-8891-14c5b0a74ffa no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:14.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7124" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":440,"failed":0}
SS
------------------------------
{"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":0,"skipped":40,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:10.318: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Apr 30 13:53:10.344: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:53:12.349: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Apr 30 13:53:12.357: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:53:14.363: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 30 13:53:14.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 30 13:53:14.377: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 30 13:53:16.377: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 30 13:53:16.380: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:16.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5868" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":40,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:14.266: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-760150a6-dc1b-41e9-97d5-8f465c646149
STEP: Creating a pod to test consume configMaps
Apr 30 13:53:14.294: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81fc56e1-aa28-458a-b9a2-fe9c6aea16fb" in namespace "projected-8520" to be "Succeeded or Failed"
Apr 30 13:53:14.297: INFO: Pod "pod-projected-configmaps-81fc56e1-aa28-458a-b9a2-fe9c6aea16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.590304ms
Apr 30 13:53:16.300: INFO: Pod "pod-projected-configmaps-81fc56e1-aa28-458a-b9a2-fe9c6aea16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005935142s
Apr 30 13:53:18.304: INFO: Pod "pod-projected-configmaps-81fc56e1-aa28-458a-b9a2-fe9c6aea16fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010131688s
STEP: Saw pod success
Apr 30 13:53:18.304: INFO: Pod "pod-projected-configmaps-81fc56e1-aa28-458a-b9a2-fe9c6aea16fb" satisfied condition "Succeeded or Failed"
Apr 30 13:53:18.308: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-projected-configmaps-81fc56e1-aa28-458a-b9a2-fe9c6aea16fb container agnhost-container: <nil>
STEP: delete the pod
Apr 30 13:53:18.318: INFO: Waiting for pod pod-projected-configmaps-81fc56e1-aa28-458a-b9a2-fe9c6aea16fb to disappear
Apr 30 13:53:18.321: INFO: Pod pod-projected-configmaps-81fc56e1-aa28-458a-b9a2-fe9c6aea16fb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:18.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8520" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":442,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:18.351: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslicemirroring
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39
[It] should mirror a custom Endpoints resource through create update and delete [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: mirroring a new custom Endpoint
Apr 30 13:53:18.391: INFO: Waiting for at least 1 EndpointSlice to exist, got 0
STEP: mirroring an update to a custom Endpoint
Apr 30 13:53:20.401: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3
STEP: mirroring deletion of a custom Endpoint
[AfterEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:22.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-5160" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":26,"skipped":455,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:16.391: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:53:16.415: INFO: Creating deployment "webserver-deployment"
Apr 30 13:53:16.419: INFO: Waiting for observed generation 1
Apr 30 13:53:18.427: INFO: Waiting for all required pods to come up
Apr 30 13:53:18.433: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 30 13:53:20.454: INFO: Waiting for deployment "webserver-deployment" to complete
Apr 30 13:53:20.461: INFO: Updating deployment "webserver-deployment" with a non-existent image
Apr 30 13:53:20.470: INFO: Updating deployment webserver-deployment
Apr 30 13:53:20.470: INFO: Waiting for observed generation 2
Apr 30 13:53:22.478: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 30 13:53:22.481: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 30 13:53:22.486: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Apr 30 13:53:22.500: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 30 13:53:22.500: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 30 13:53:22.502: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Apr 30 13:53:22.513: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Apr 30 13:53:22.513: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Apr 30 13:53:22.533: INFO: Updating deployment webserver-deployment
Apr 30 13:53:22.533: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Apr 30 13:53:22.545: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 30 13:53:22.553: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Apr 30 13:53:22.574: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9392 dcbe1a27-a8fc-42ca-a783-fad2580015c3 11033 3 2022-04-30 13:53:16 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:53:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000cc57e8 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2022-04-30 13:53:20 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-30 13:53:22 +0000 UTC,LastTransitionTime:2022-04-30 13:53:22 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 30 13:53:22.585: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-9392 0ea9a1ce-98a5-4031-b4fa-a394c35bb21f 11028 3 2022-04-30 13:53:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment dcbe1a27-a8fc-42ca-a783-fad2580015c3 0xc000878d27 0xc000878d28}] [] [{kube-controller-manager Update apps/v1 2022-04-30 13:53:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcbe1a27-a8fc-42ca-a783-fad2580015c3\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:53:20 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000878dc8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> 
nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:53:22.585: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 30 13:53:22.585: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-9392 693713cd-447b-4a80-9d1e-a9823aa00cc6 11027 3 2022-04-30 13:53:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment dcbe1a27-a8fc-42ca-a783-fad2580015c3 0xc000878e27 0xc000878e28}] [] [{kube-controller-manager Update apps/v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcbe1a27-a8fc-42ca-a783-fad2580015c3\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:53:17 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 
5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000878eb8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:53:22.603: INFO: Pod "webserver-deployment-566f96c878-96c7b" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-96c7b webserver-deployment-566f96c878- deployment-9392 258ea90d-45a0-4420-bfa6-1ab103bc361d 11039 0 2022-04-30 13:53:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 0ea9a1ce-98a5-4031-b4fa-a394c35bb21f 0xc000cc5bd7 0xc000cc5bd8}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ea9a1ce-98a5-4031-b4fa-a394c35bb21f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z9qhv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Re
sourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9qhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Con
ditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.603: INFO: Pod "webserver-deployment-566f96c878-dk6n6" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-dk6n6 webserver-deployment-566f96c878- deployment-9392 07b0d686-20c3-4ee1-a583-6dda1b633a60 10998 0 2022-04-30 13:53:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 0ea9a1ce-98a5-4031-b4fa-a394c35bb21f 0xc000cc5d27 0xc000cc5d28}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ea9a1ce-98a5-4031-b4fa-a394c35bb21f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6gxt6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6gxt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-a2pwxc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.3.66,StartTime:2022-04-30 13:53:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.604: INFO: Pod "webserver-deployment-566f96c878-h5ftc" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-h5ftc webserver-deployment-566f96c878- deployment-9392 dc4f90d9-0a41-4e67-86b3-d77aedfb59dc 11007 0 2022-04-30 13:53:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 0ea9a1ce-98a5-4031-b4fa-a394c35bb21f 0xc000cc5f60 0xc000cc5f61}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ea9a1ce-98a5-4031-b4fa-a394c35bb21f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vkp7c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vkp7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-ctsmx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.65,StartTime:2022-04-30 13:53:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.604: INFO: Pod "webserver-deployment-566f96c878-ng6xl" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-ng6xl webserver-deployment-566f96c878- deployment-9392 0eed4ecd-1920-4958-aa6a-d92aed05c618 11004 0 2022-04-30 13:53:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 0ea9a1ce-98a5-4031-b4fa-a394c35bb21f 0xc00434f7f0 0xc00434f7f1}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ea9a1ce-98a5-4031-b4fa-a394c35bb21f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-47dd6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47dd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-o9uwcm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.79,StartTime:2022-04-30 13:53:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.604: INFO: Pod "webserver-deployment-566f96c878-pmwnm" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-pmwnm webserver-deployment-566f96c878- deployment-9392 46d9ad1e-cc83-4709-9aaf-e6fc2187e9fb 11001 0 2022-04-30 13:53:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 0ea9a1ce-98a5-4031-b4fa-a394c35bb21f 0xc00434f9f0 0xc00434f9f1}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ea9a1ce-98a5-4031-b4fa-a394c35bb21f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nk7c6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nk7c6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-o9uwcm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.80,StartTime:2022-04-30 13:53:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.605: INFO: Pod "webserver-deployment-566f96c878-sjhtx" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-sjhtx webserver-deployment-566f96c878- deployment-9392 359ed975-7058-48e5-9462-18106a32c9a6 11047 0 2022-04-30 13:53:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 0ea9a1ce-98a5-4031-b4fa-a394c35bb21f 0xc00434fbf0 0xc00434fbf1}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ea9a1ce-98a5-4031-b4fa-a394c35bb21f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q4jdj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Re
sourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4jdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Con
ditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.605: INFO: Pod "webserver-deployment-566f96c878-vkgq5" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-vkgq5 webserver-deployment-566f96c878- deployment-9392 5240c713-0698-4016-96b9-d287e6ecf293 10995 0 2022-04-30 13:53:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 0ea9a1ce-98a5-4031-b4fa-a394c35bb21f 0xc00434fd37 0xc00434fd38}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ea9a1ce-98a5-4031-b4fa-a394c35bb21f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k5knz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5knz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.100,StartTime:2022-04-30 13:53:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.605: INFO: Pod "webserver-deployment-566f96c878-wmb4d" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-wmb4d webserver-deployment-566f96c878- deployment-9392 98d19a8e-ac45-4921-924f-42387af4667a 11036 0 2022-04-30 13:53:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 0ea9a1ce-98a5-4031-b4fa-a394c35bb21f 0xc00434ff40 0xc00434ff41}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ea9a1ce-98a5-4031-b4fa-a394c35bb21f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m79bl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Re
sourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m79bl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostna
meAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.605: INFO: Pod "webserver-deployment-5d9fdcc779-4d4x9" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-4d4x9 webserver-deployment-5d9fdcc779- deployment-9392 e4662341-5eec-44c2-b643-64ab9cd91c57 10872 0 2022-04-30 13:53:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b00a0 0xc0020b00a1}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q6kfh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLi
st{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6kfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-ctsmx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralConta
iner{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.64,StartTime:2022-04-30 13:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:53:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://40129557947a99a98056edc70642b7e2bc707f038d8765c51733f5b9180a9171,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.606: INFO: Pod "webserver-deployment-5d9fdcc779-6848g" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-6848g webserver-deployment-5d9fdcc779- deployment-9392 7b8ad60b-a596-468c-8835-5cd75394ed8c 10890 0 2022-04-30 13:53:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b0270 0xc0020b0271}] [] 
[{kube-controller-manager Update v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x76q7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x76q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-o9uwcm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:18 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.78,StartTime:2022-04-30 13:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:53:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://cfbd91cc9ca6cfd382d0c90292062fd75bc6ebccc2a910d1ac98b55017f19590,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.606: INFO: Pod "webserver-deployment-5d9fdcc779-7vzpq" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-7vzpq webserver-deployment-5d9fdcc779- deployment-9392 bd1d3b9b-940e-4554-816b-ed9a5440d5a7 10854 0 2022-04-30 13:53:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b0440 0xc0020b0441}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.99\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-48qwm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-48qwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 
13:53:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.99,StartTime:2022-04-30 13:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:53:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://970e853657098d2eba8cd405836d2f8cf6da91ce5aadb67e0b926d7cc8dbf4ac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.99,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.606: INFO: Pod "webserver-deployment-5d9fdcc779-ftjrr" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-ftjrr webserver-deployment-5d9fdcc779- deployment-9392 bd16f648-f9fc-43c0-a4a9-854bc32c25cc 11043 0 2022-04-30 13:53:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b0610 0xc0020b0611}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-whxn2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-whxn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status
:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.607: INFO: Pod "webserver-deployment-5d9fdcc779-gkwb9" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-gkwb9 webserver-deployment-5d9fdcc779- deployment-9392 155807b6-3292-410a-92ef-a4aaa870586e 11044 0 2022-04-30 13:53:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b0747 0xc0020b0748}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zj8mn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zj8mn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.607: INFO: Pod "webserver-deployment-5d9fdcc779-j26zx" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-j26zx webserver-deployment-5d9fdcc779- deployment-9392 684fd2f3-5c1d-40be-a218-6816f07b1641 10858 0 2022-04-30 
13:53:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b0887 0xc0020b0888}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n9kwq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9kwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 
13:53:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.98,StartTime:2022-04-30 13:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:53:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://c650f0cbc1429735130ef0b7381e50e6f11fc126130d44f7c400bc85ed794964,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.615: INFO: Pod "webserver-deployment-5d9fdcc779-j8hqj" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-j8hqj webserver-deployment-5d9fdcc779- deployment-9392 a9c536ae-1e8c-4cbb-bbf4-cdd9eab67943 10874 0 2022-04-30 13:53:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b0a60 0xc0020b0a61}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6hb8g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6hb8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-ctsmx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 
13:53:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.63,StartTime:2022-04-30 13:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:53:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://d27a171540a7936b3da39d037a3af5aab35cba5a4fea083ecdd4a7713b47292b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.616: INFO: Pod "webserver-deployment-5d9fdcc779-jrtmw" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-jrtmw webserver-deployment-5d9fdcc779- deployment-9392 64357b04-4ba3-48bb-ad6d-10d596af0a3b 11034 0 2022-04-30 13:53:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b0c30 0xc0020b0c31}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xnfbt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xnfbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-o9uwcm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralC
ontainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.617: INFO: Pod "webserver-deployment-5d9fdcc779-sh8kv" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-sh8kv webserver-deployment-5d9fdcc779- deployment-9392 48111f74-5375-46b0-8a4e-610fd36140f3 10868 0 2022-04-30 13:53:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b0d80 0xc0020b0d81}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xg8z6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLi
st{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xg8z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-a2pwxc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},S
etHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.3.65,StartTime:2022-04-30 13:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:53:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://600b38087bb1e2ace57713da8bbad1099f975940a01dcf8424966c8aeced7497,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.617: INFO: Pod "webserver-deployment-5d9fdcc779-vslms" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-vslms webserver-deployment-5d9fdcc779- deployment-9392 48b4524d-df11-4224-8d20-593031905282 10863 0 2022-04-30 13:53:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b0f50 0xc0020b0f51}] [] 
[{kube-controller-manager Update v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qr9m2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qr9m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-a2pwxc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:18 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.3.64,StartTime:2022-04-30 13:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:53:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://9ea3c8795b067a0260ab34921629e94633d5d6c026bdecd06cc2a787b9159579,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:22.617: INFO: Pod "webserver-deployment-5d9fdcc779-vw7vf" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-vw7vf webserver-deployment-5d9fdcc779- deployment-9392 be93dfa8-13d3-4ecf-ae5c-816dbb1c966d 10852 0 2022-04-30 13:53:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 693713cd-447b-4a80-9d1e-a9823aa00cc6 0xc0020b1120 0xc0020b1121}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"693713cd-447b-4a80-9d1e-a9823aa00cc6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.97\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-98tdb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-98tdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 
13:53:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.97,StartTime:2022-04-30 13:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:53:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://dbc9125a72897bada64c304236e35923ad9af6b8756b0fe135525ca971c2cee1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:22.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9392" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":2,"skipped":41,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:22.730: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should be submitted and removed [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 30 13:53:22.786: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:28.369: INFO:
Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4695" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":70,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:22.476: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7356
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7356
STEP: creating replication controller externalsvc in namespace services-7356
I0430 13:53:22.534400 21 runners.go:193] Created
replication controller with name: externalsvc, namespace: services-7356, replica count: 2
I0430 13:53:25.585712 21 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0430 13:53:28.586410 21 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Apr 30 13:53:28.606: INFO: Creating new exec pod
Apr 30 13:53:32.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7356 exec execpodxwvcz -- /bin/sh -x -c nslookup clusterip-service.services-7356.svc.cluster.local'
Apr 30 13:53:32.840: INFO: stderr: "+ nslookup clusterip-service.services-7356.svc.cluster.local\n"
Apr 30 13:53:32.840: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nclusterip-service.services-7356.svc.cluster.local\tcanonical name = externalsvc.services-7356.svc.cluster.local.\nName:\texternalsvc.services-7356.svc.cluster.local\nAddress: 10.134.234.0\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7356, will wait for the garbage collector to delete the pods
Apr 30 13:53:32.899: INFO: Deleting ReplicationController externalsvc took: 4.920588ms
Apr 30 13:53:32.999: INFO: Terminating ReplicationController externalsvc pods took: 100.135306ms
Apr 30 13:53:34.413: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:34.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7356" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":27,"skipped":478,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:28.433: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should delete a collection of pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pods
Apr 30 13:53:28.487: INFO: created test-pod-1
Apr 30 13:53:30.504: INFO: running and ready test-pod-1
Apr 30 13:53:30.508: INFO: created test-pod-2
Apr 30 13:53:34.517: INFO: running and ready test-pod-2
Apr 30 13:53:34.522: INFO: created test-pod-3
Apr 30 13:53:36.529: INFO: running and ready test-pod-3
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
Apr 30 13:53:36.553: INFO: Pod quantity 3 is different from expected quantity 0
Apr 30 13:53:37.557: INFO: Pod quantity 1 is different from expected quantity 0
Apr 30 13:53:38.557: INFO: Pod quantity 1 is different from expected quantity 0
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:39.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8672" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":4,"skipped":93,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:34.440: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:53:34.463: INFO: Creating ReplicaSet my-hostname-basic-08c13e98-aa23-4ca9-bbd1-6c85f8d21c76
Apr 30 13:53:34.471: INFO: Pod name my-hostname-basic-08c13e98-aa23-4ca9-bbd1-6c85f8d21c76: Found 0 pods out of 1
Apr 30 13:53:39.474: INFO: Pod name my-hostname-basic-08c13e98-aa23-4ca9-bbd1-6c85f8d21c76: Found 1 pods out of 1
Apr 30 13:53:39.475: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-08c13e98-aa23-4ca9-bbd1-6c85f8d21c76" is running
Apr 30 13:53:39.480: INFO: Pod "my-hostname-basic-08c13e98-aa23-4ca9-bbd1-6c85f8d21c76-g2ngq" is running (conditions:
[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-30 13:53:34 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-30 13:53:35 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-30 13:53:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-30 13:53:34 +0000 UTC Reason: Message:}])
Apr 30 13:53:39.480: INFO: Trying to dial the pod
Apr 30 13:53:44.489: INFO: Controller my-hostname-basic-08c13e98-aa23-4ca9-bbd1-6c85f8d21c76: Got expected result from replica 1 [my-hostname-basic-08c13e98-aa23-4ca9-bbd1-6c85f8d21c76-g2ngq]: "my-hostname-basic-08c13e98-aa23-4ca9-bbd1-6c85f8d21c76-g2ngq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:44.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7541" for this suite.
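Each finished spec in this run appends a one-line JSON summary such as the `{"msg":"PASSED …"}` records above, with running `completed`/`skipped` counters per worker. When triaging a run like this, tallying those records is the quickest way to see the pass/fail picture; a small sketch (the record shape matches the log, the helper itself is hypothetical):

```python
import json

def tally_specs(log_lines):
    """Count PASSED/FAILED spec summaries in kubetest/Ginkgo output."""
    counts = {"passed": 0, "failed": 0}
    for line in log_lines:
        line = line.strip()
        if not line.startswith('{"msg"'):
            continue  # ordinary log line, not a spec summary
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # record truncated by a log wrap
        if rec["msg"].startswith("PASSED"):
            counts["passed"] += 1
        elif rec["msg"].startswith("FAILED"):
            counts["failed"] += 1
    return counts

sample = [
    '{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":28,"skipped":483,"failed":0}',
    '{"msg":"FAILED [sig-node] Container Lifecycle Hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":70,"failed":1}',
]
print(tally_specs(sample))  # {'passed': 1, 'failed': 1}
```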
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":28,"skipped":483,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:44.559: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:53:44.582: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:47.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9930" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":29,"skipped":535,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:52:49.425: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:49.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2081" for this suite.
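The probing spec above expects a pod whose readiness probe always fails to stay Running but never become Ready, and never restart, since only liveness probes trigger restarts. A sketch of such a pod manifest assembled in Python (image and names are illustrative, not the suite's actual fixture):

```python
def never_ready_pod(name: str) -> dict:
    """Pod whose readiness probe always fails.

    A failing readiness probe only keeps the pod out of Service endpoints;
    unlike a failing liveness probe, it never causes a container restart.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": "main",
                "image": "registry.k8s.io/e2e-test-images/busybox:1.29",  # illustrative
                "command": ["sleep", "3600"],
                "readinessProbe": {
                    "exec": {"command": ["/bin/false"]},  # always fails
                    "periodSeconds": 5,
                },
            }],
        },
    }

pod = never_ready_pod("never-ready-demo")
print(pod["spec"]["containers"][0]["readinessProbe"]["exec"]["command"])
```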
•
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":497,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:49.493: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] Deployment should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:53:49.511: INFO: Creating simple deployment test-new-deployment
Apr 30 13:53:49.521: INFO: deployment "test-new-deployment" doesn't have the required revision set
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the deployment Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Apr 30 13:53:51.562: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-2558 3d3111f1-6230-40c5-9f5e-d6e3a528515c 11614 3 2022-04-30 13:53:49 +0000
UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2022-04-30 13:53:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000b1c738 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-5d9fdcc779" has successfully progressed.,LastUpdateTime:2022-04-30 13:53:50 +0000 UTC,LastTransitionTime:2022-04-30 13:53:49 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-30 13:53:51 +0000 UTC,LastTransitionTime:2022-04-30 13:53:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 30 13:53:51.567: INFO: New ReplicaSet "test-new-deployment-5d9fdcc779" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-5d9fdcc779 deployment-2558 ddb56901-1759-4366-b044-7bed0ad3e5d4 11618 2 2022-04-30 13:53:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 
3d3111f1-6230-40c5-9f5e-d6e3a528515c 0xc000b1cb67 0xc000b1cb68}] [] [{kube-controller-manager Update apps/v1 2022-04-30 13:53:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d3111f1-6230-40c5-9f5e-d6e3a528515c\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:53:50 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000b1cbf8 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:53:51.571: INFO: Pod "test-new-deployment-5d9fdcc779-dklgv" is available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-dklgv test-new-deployment-5d9fdcc779- deployment-2558 5df185c1-9f12-4af7-8644-e563391ea032 11604 0 2022-04-30 13:53:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 ddb56901-1759-4366-b044-7bed0ad3e5d4 0xc003b2b857 0xc003b2b858}] [] [{kube-controller-manager Update v1 2022-04-30 13:53:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ddb56901-1759-4366-b044-7bed0ad3e5d4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:53:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.86\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-r54fz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLi
st{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r54fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-o9uwcm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},S
etHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.86,StartTime:2022-04-30 13:53:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:53:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://01daec1885e80397789da10f2089f473331666d4fa4cb3dffaeca67d4b30b28b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 30 13:53:51.571: INFO: Pod "test-new-deployment-5d9fdcc779-wjppw" is not available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-wjppw test-new-deployment-5d9fdcc779- deployment-2558 8b6d0427-4105-4e93-838a-eb3e2ff17070 11616 0 2022-04-30 13:53:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 ddb56901-1759-4366-b044-7bed0ad3e5d4 0xc003b2ba40 0xc003b2ba41}] [] 
[{kube-controller-manager Update v1 2022-04-30 13:53:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ddb56901-1759-4366-b044-7bed0ad3e5d4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h52ps,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,
Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h52ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-ctsmx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},Topo
logySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:53:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:51.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2558" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":22,"skipped":518,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:47.770: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance]
[Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 30 13:53:47.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8daa35d-4900-48b6-8f27-344b21799f7b" in namespace "downward-api-9864" to be "Succeeded or Failed"
Apr 30 13:53:47.808: INFO: Pod "downwardapi-volume-e8daa35d-4900-48b6-8f27-344b21799f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.545681ms
Apr 30 13:53:49.812: INFO: Pod "downwardapi-volume-e8daa35d-4900-48b6-8f27-344b21799f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006255933s
Apr 30 13:53:51.817: INFO: Pod "downwardapi-volume-e8daa35d-4900-48b6-8f27-344b21799f7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010852087s
STEP: Saw pod success
Apr 30 13:53:51.817: INFO: Pod "downwardapi-volume-e8daa35d-4900-48b6-8f27-344b21799f7b" satisfied condition "Succeeded or Failed"
Apr 30 13:53:51.819: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod downwardapi-volume-e8daa35d-4900-48b6-8f27-344b21799f7b container client-container: <nil>
STEP: delete the pod
Apr 30 13:53:51.832: INFO: Waiting for pod downwardapi-volume-e8daa35d-4900-48b6-8f27-344b21799f7b to disappear
Apr 30 13:53:51.835: INFO: Pod downwardapi-volume-e8daa35d-4900-48b6-8f27-344b21799f7b no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:51.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9864" for this suite.
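The Downward API spec above mounts container resource information as a file and, because no memory limit is set, expects the node's allocatable memory to be reported as the default. A sketch of the volume wiring such a test drives (the `resourceFieldRef` selector is a real Downward API field; pod name, image, and paths are illustrative):

```python
def downward_api_pod(name: str) -> dict:
    """Pod that exposes its effective memory limit via a downwardAPI volume."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "registry.k8s.io/e2e-test-images/agnhost:2.39",  # illustrative
                "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {
                    "items": [{
                        "path": "memory_limit",
                        # With no limit set on the container, the kubelet
                        # substitutes the node-allocatable memory value.
                        "resourceFieldRef": {
                            "containerName": "client-container",
                            "resource": "limits.memory",
                        },
                    }],
                },
            }],
        },
    }

print(downward_api_pod("downwardapi-volume-demo")["spec"]["volumes"][0]["downwardAPI"]["items"][0]["path"])
```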
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":564,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:51.601: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:53:51.658: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a99828f4-6602-47e1-a2a4-81ab92622969", Controller:(*bool)(0xc0044871b6), BlockOwnerDeletion:(*bool)(0xc0044871b7)}}
Apr 30 13:53:51.667: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"42b08000-2ea5-4b67-9043-111578bae161", Controller:(*bool)(0xc004487476), BlockOwnerDeletion:(*bool)(0xc004487477)}}
Apr 30 13:53:51.674: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0ad50c66-fa8d-485a-b988-e07995ce4261", Controller:(*bool)(0xc004487726), BlockOwnerDeletion:(*bool)(0xc004487727)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:53:56.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6158" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":23,"skipped":529,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:39.582: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-qjjj
STEP: Creating a pod to test atomic-volume-subpath
Apr 30 13:53:39.615: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qjjj" in namespace "subpath-4986" to be "Succeeded or Failed"
Apr 30 13:53:39.617: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28391ms
Apr 30 13:53:41.621: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 2.006256225s
Apr 30 13:53:43.627: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 4.011557209s
Apr 30 13:53:45.631: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 6.015687949s
Apr 30 13:53:47.635: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 8.019765606s
Apr 30 13:53:49.638: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 10.022860517s
Apr 30 13:53:51.646: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 12.030643484s
Apr 30 13:53:53.650: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 14.034452788s
Apr 30 13:53:55.654: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 16.039186886s
Apr 30 13:53:57.659: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 18.043348021s
Apr 30 13:53:59.662: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Running", Reason="", readiness=true. Elapsed: 20.046952213s
Apr 30 13:54:01.667: INFO: Pod "pod-subpath-test-configmap-qjjj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051461085s
STEP: Saw pod success
Apr 30 13:54:01.667: INFO: Pod "pod-subpath-test-configmap-qjjj" satisfied condition "Succeeded or Failed"
Apr 30 13:54:01.670: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-subpath-test-configmap-qjjj container test-container-subpath-configmap-qjjj: <nil>
STEP: delete the pod
Apr 30 13:54:01.689: INFO: Waiting for pod pod-subpath-test-configmap-qjjj to disappear
Apr 30 13:54:01.691: INFO: Pod pod-subpath-test-configmap-qjjj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qjjj
Apr 30 13:54:01.691: INFO: Deleting pod "pod-subpath-test-configmap-qjjj" in namespace "subpath-4986"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:01.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4986" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":5,"skipped":103,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:52:02.037: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a cronjob
STEP: Ensuring more than one job is running at a time
STEP: Ensuring at least two running jobs exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:02.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-1923" for this suite.
• [SLOW TEST:120.045 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":29,"skipped":746,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:01.707: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1539
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Apr 30 13:54:01.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7295 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2'
Apr 30 13:54:01.812: INFO: stderr: ""
Apr 30 13:54:01.812: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543
Apr 30 13:54:01.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7295 delete pods e2e-test-httpd-pod'
Apr 30 13:54:04.388: INFO: stderr: ""
Apr 30 13:54:04.388: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:04.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7295" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":6,"skipped":106,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:02.095: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-dfa9f8a5-86e0-4a75-86b2-d0190733f150
STEP: Creating a pod to test consume secrets
Apr 30 13:54:02.141: INFO: Waiting up to 5m0s for pod "pod-secrets-272bba34-c6b6-4048-a095-fc12856ead8a" in namespace "secrets-222" to be "Succeeded or Failed"
Apr 30 13:54:02.143: INFO: Pod "pod-secrets-272bba34-c6b6-4048-a095-fc12856ead8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544843ms
Apr 30 13:54:04.147: INFO: Pod "pod-secrets-272bba34-c6b6-4048-a095-fc12856ead8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006525011s
Apr 30 13:54:06.155: INFO: Pod "pod-secrets-272bba34-c6b6-4048-a095-fc12856ead8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014263957s
STEP: Saw pod success
Apr 30 13:54:06.155: INFO: Pod "pod-secrets-272bba34-c6b6-4048-a095-fc12856ead8a" satisfied condition "Succeeded or Failed"
Apr 30 13:54:06.158: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-secrets-272bba34-c6b6-4048-a095-fc12856ead8a container secret-env-test: <nil>
STEP: delete the pod
Apr 30 13:54:06.171: INFO: Waiting for pod pod-secrets-272bba34-c6b6-4048-a095-fc12856ead8a to disappear
Apr 30 13:54:06.173: INFO: Pod pod-secrets-272bba34-c6b6-4048-a095-fc12856ead8a no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:06.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-222" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":753,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:56.728: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 30 13:53:56.759: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:53:58.921: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:08.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-224" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":24,"skipped":551,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:04.410: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-11a39865-12f4-4763-9060-230ff044ef1f
STEP: Creating a pod to test consume configMaps
Apr 30 13:54:04.443: INFO: Waiting up to 5m0s for pod "pod-configmaps-769e8e68-d405-4848-8895-a44808777311" in namespace "configmap-9817" to be "Succeeded or Failed"
Apr 30 13:54:04.448: INFO: Pod "pod-configmaps-769e8e68-d405-4848-8895-a44808777311": Phase="Pending", Reason="", readiness=false. Elapsed: 4.420898ms
Apr 30 13:54:06.452: INFO: Pod "pod-configmaps-769e8e68-d405-4848-8895-a44808777311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00894554s
Apr 30 13:54:08.456: INFO: Pod "pod-configmaps-769e8e68-d405-4848-8895-a44808777311": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013210453s
STEP: Saw pod success
Apr 30 13:54:08.457: INFO: Pod "pod-configmaps-769e8e68-d405-4848-8895-a44808777311" satisfied condition "Succeeded or Failed"
Apr 30 13:54:08.459: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-configmaps-769e8e68-d405-4848-8895-a44808777311 container agnhost-container: <nil>
STEP: delete the pod
Apr 30 13:54:08.471: INFO: Waiting for pod pod-configmaps-769e8e68-d405-4848-8895-a44808777311 to disappear
Apr 30 13:54:08.474: INFO: Pod pod-configmaps-769e8e68-d405-4848-8895-a44808777311 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:08.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9817" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":114,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:06.186: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:54:06.202: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 30 13:54:08.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6399 --namespace=crd-publish-openapi-6399 create -f -'
Apr 30 13:54:09.212: INFO: stderr: ""
Apr 30 13:54:09.212: INFO: stdout: "e2e-test-crd-publish-openapi-8642-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 30 13:54:09.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6399 --namespace=crd-publish-openapi-6399 delete e2e-test-crd-publish-openapi-8642-crds test-cr'
Apr 30 13:54:09.279: INFO: stderr: ""
Apr 30 13:54:09.279: INFO: stdout: "e2e-test-crd-publish-openapi-8642-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Apr 30 13:54:09.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6399 --namespace=crd-publish-openapi-6399 apply -f -'
Apr 30 13:54:09.466: INFO: stderr: ""
Apr 30 13:54:09.467: INFO: stdout: "e2e-test-crd-publish-openapi-8642-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 30 13:54:09.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6399 --namespace=crd-publish-openapi-6399 delete e2e-test-crd-publish-openapi-8642-crds test-cr'
Apr 30 13:54:09.552: INFO: stderr: ""
Apr 30 13:54:09.552: INFO: stdout: "e2e-test-crd-publish-openapi-8642-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 30 13:54:09.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6399 explain e2e-test-crd-publish-openapi-8642-crds'
Apr 30 13:54:09.732: INFO: stderr: ""
Apr 30 13:54:09.732: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8642-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:11.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6399" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":31,"skipped":757,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:08.505: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a Pod with a 'name' label pod-adoption-release is created
Apr 30 13:54:08.533: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:10.536: INFO: The status of Pod pod-adoption-release is Running (Ready = true)
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 30 13:54:11.552: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:12.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1122" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":8,"skipped":133,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:11.888: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-map-9aeb35b6-e07a-40aa-926d-85656093c360
STEP: Creating a pod to test consume secrets
Apr 30 13:54:11.919: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c879fea5-96dd-4d74-a502-530788236f8d" in namespace "projected-421" to be "Succeeded or Failed"
Apr 30 13:54:11.922: INFO: Pod "pod-projected-secrets-c879fea5-96dd-4d74-a502-530788236f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.890656ms
Apr 30 13:54:13.927: INFO: Pod "pod-projected-secrets-c879fea5-96dd-4d74-a502-530788236f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007532557s
Apr 30 13:54:15.931: INFO: Pod "pod-projected-secrets-c879fea5-96dd-4d74-a502-530788236f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011834736s
STEP: Saw pod success
Apr 30 13:54:15.931: INFO: Pod "pod-projected-secrets-c879fea5-96dd-4d74-a502-530788236f8d" satisfied condition "Succeeded or Failed"
Apr 30 13:54:15.934: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-projected-secrets-c879fea5-96dd-4d74-a502-530788236f8d container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 30 13:54:15.948: INFO: Waiting for pod pod-projected-secrets-c879fea5-96dd-4d74-a502-530788236f8d to disappear
Apr 30 13:54:15.950: INFO: Pod pod-projected-secrets-c879fea5-96dd-4d74-a502-530788236f8d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:15.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-421" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":789,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:16.004: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1573
[It] should update a single-container pod's image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Apr 30 13:54:16.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2920 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
Apr 30 13:54:16.093: INFO: stderr: ""
Apr 30 13:54:16.093: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Apr 30 13:54:21.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2920 get pod e2e-test-httpd-pod -o json'
Apr 30 13:54:21.208: INFO: stderr: ""
Apr 30 13:54:21.208: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2022-04-30T13:54:16Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2920\",\n \"resourceVersion\": \"12077\",\n \"uid\": \"3b8d759e-460b-4153-92d3-8ab2f3389096\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-l5t4n\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-i77nai-worker-o9uwcm\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-l5t4n\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-30T13:54:16Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-30T13:54:17Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-30T13:54:17Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-30T13:54:16Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://42eb94cb41bbf7e4d0384baa0f6cb4ef4baa2cb830d24160fc5660b58cb5b65c\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-04-30T13:54:16Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.6.94\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.6.94\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-04-30T13:54:16Z\"\n }\n}\n"
STEP: replace the image in the pod
Apr 30 13:54:21.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2920 replace -f -'
Apr 30 13:54:21.740: INFO: stderr: ""
Apr 30 13:54:21.740: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2
[AfterEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577
Apr 30 13:54:21.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2920 delete pods e2e-test-httpd-pod'
Apr 30 13:54:23.456: INFO: stderr: ""
Apr 30 13:54:23.456: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:23.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2920" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":33,"skipped":828,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:23.477: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should support creating EndpointSlice API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/discovery.k8s.io
STEP: getting /apis/discovery.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 30 13:54:23.518: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 30 13:54:23.521: INFO: starting watch
STEP: patching
STEP: updating
Apr 30 13:54:23.533: INFO: waiting for watch events with expected annotations
Apr 30 13:54:23.533: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:23.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-7761" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":34,"skipped":838,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:23.622: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name projected-secret-test-cc567c69-8c41-458f-b6c3-5605aa60da4e
STEP: Creating a pod to test consume secrets
Apr 30 13:54:23.649: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6bed423d-0b69-4b5f-b676-c66315d92827" in namespace "projected-3172" to be "Succeeded or Failed"
Apr 30 13:54:23.652: INFO: Pod "pod-projected-secrets-6bed423d-0b69-4b5f-b676-c66315d92827": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.302589ms
Apr 30 13:54:25.662: INFO: Pod "pod-projected-secrets-6bed423d-0b69-4b5f-b676-c66315d92827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012324182s
Apr 30 13:54:27.666: INFO: Pod "pod-projected-secrets-6bed423d-0b69-4b5f-b676-c66315d92827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016875709s
STEP: Saw pod success
Apr 30 13:54:27.666: INFO: Pod "pod-projected-secrets-6bed423d-0b69-4b5f-b676-c66315d92827" satisfied condition "Succeeded or Failed"
Apr 30 13:54:27.669: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-projected-secrets-6bed423d-0b69-4b5f-b676-c66315d92827 container secret-volume-test: <nil>
STEP: delete the pod
Apr 30 13:54:27.684: INFO: Waiting for pod pod-projected-secrets-6bed423d-0b69-4b5f-b676-c66315d92827 to disappear
Apr 30 13:54:27.686: INFO: Pod pod-projected-secrets-6bed423d-0b69-4b5f-b676-c66315d92827 no longer exists
[AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:27.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3172" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":892,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:12.597: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod liveness-711f4e0a-3f0b-49e8-9f37-b10cf40e5055 in namespace container-probe-7479
Apr 30 13:54:14.646: INFO: Started pod liveness-711f4e0a-3f0b-49e8-9f37-b10cf40e5055 in namespace container-probe-7479
STEP: checking the pod's current state and verifying that restartCount is present
Apr 30 13:54:14.649: INFO: Initial restart count of pod liveness-711f4e0a-3f0b-49e8-9f37-b10cf40e5055 is 0
Apr 30 13:54:34.712: INFO: Restart count of pod container-probe-7479/liveness-711f4e0a-3f0b-49e8-9f37-b10cf40e5055 is now 1 (20.06331708s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:34.740: INFO: Waiting up to 3m0s for all (but 0) nodes
to be ready
STEP: Destroying namespace "container-probe-7479" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":134,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:34.883: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-upd-2fc576d1-8c85-4016-b93d-43c8b9c0afe1
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:37.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP:
Destroying namespace "configmap-5488" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":168,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:27.702: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:54:27.732: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 30 13:54:31.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7009 --namespace=crd-publish-openapi-7009 create -f -'
Apr 30 13:54:33.710: INFO: stderr: ""
Apr 30 13:54:33.711: INFO: stdout: "e2e-test-crd-publish-openapi-2842-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 30 13:54:33.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7009 --namespace=crd-publish-openapi-7009
delete e2e-test-crd-publish-openapi-2842-crds test-cr'
Apr 30 13:54:33.896: INFO: stderr: ""
Apr 30 13:54:33.896: INFO: stdout: "e2e-test-crd-publish-openapi-2842-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Apr 30 13:54:33.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7009 --namespace=crd-publish-openapi-7009 apply -f -'
Apr 30 13:54:34.410: INFO: stderr: ""
Apr 30 13:54:34.410: INFO: stdout: "e2e-test-crd-publish-openapi-2842-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 30 13:54:34.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7009 --namespace=crd-publish-openapi-7009 delete e2e-test-crd-publish-openapi-2842-crds test-cr'
Apr 30 13:54:34.606: INFO: stderr: ""
Apr 30 13:54:34.606: INFO: stdout: "e2e-test-crd-publish-openapi-2842-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 30 13:54:34.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7009 explain e2e-test-crd-publish-openapi-2842-crds'
Apr 30 13:54:35.092: INFO: stderr: ""
Apr 30 13:54:35.092: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2842-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents.
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:39.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7009" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":36,"skipped":897,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:39.860: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Apr 30 13:54:51.214: INFO: 69 pods remaining
Apr 30 13:54:51.214: INFO: 69 pods has nil DeletionTimestamp
Apr 30 13:54:51.214: INFO:
STEP: Gathering metrics
Apr 30 13:54:56.362: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-i77nai-control-plane-r7q6n is Running (Ready = true)
Apr 30 13:54:56.884: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
Apr 30 13:54:56.884: INFO: Deleting pod "simpletest-rc-to-be-deleted-24t8t" in namespace "gc-4484"
Apr 30 13:54:56.907: INFO: Deleting pod "simpletest-rc-to-be-deleted-28ll8" in namespace "gc-4484"
Apr 30 13:54:56.934: INFO: Deleting pod "simpletest-rc-to-be-deleted-2b4dt" in namespace "gc-4484"
Apr 30 13:54:57.000: INFO: Deleting pod
"simpletest-rc-to-be-deleted-2b6n9" in namespace "gc-4484"
Apr 30 13:54:57.034: INFO: Deleting pod "simpletest-rc-to-be-deleted-2jrs6" in namespace "gc-4484"
Apr 30 13:54:57.072: INFO: Deleting pod "simpletest-rc-to-be-deleted-2vj7k" in namespace "gc-4484"
Apr 30 13:54:57.096: INFO: Deleting pod "simpletest-rc-to-be-deleted-47b5x" in namespace "gc-4484"
Apr 30 13:54:57.143: INFO: Deleting pod "simpletest-rc-to-be-deleted-47gnq" in namespace "gc-4484"
Apr 30 13:54:57.169: INFO: Deleting pod "simpletest-rc-to-be-deleted-4dptj" in namespace "gc-4484"
Apr 30 13:54:57.243: INFO: Deleting pod "simpletest-rc-to-be-deleted-4jf4f" in namespace "gc-4484"
Apr 30 13:54:57.259: INFO: Deleting pod "simpletest-rc-to-be-deleted-5g7sk" in namespace "gc-4484"
Apr 30 13:54:57.346: INFO: Deleting pod "simpletest-rc-to-be-deleted-5sl68" in namespace "gc-4484"
Apr 30 13:54:57.355: INFO: Deleting pod "simpletest-rc-to-be-deleted-6c4ck" in namespace "gc-4484"
Apr 30 13:54:57.368: INFO: Deleting pod "simpletest-rc-to-be-deleted-6q529" in namespace "gc-4484"
Apr 30 13:54:57.390: INFO: Deleting pod "simpletest-rc-to-be-deleted-6r5p6" in namespace "gc-4484"
Apr 30 13:54:57.445: INFO: Deleting pod "simpletest-rc-to-be-deleted-76bvk" in namespace "gc-4484"
Apr 30 13:54:57.461: INFO: Deleting pod "simpletest-rc-to-be-deleted-7ft6s" in namespace "gc-4484"
Apr 30 13:54:57.481: INFO: Deleting pod "simpletest-rc-to-be-deleted-7gwq4" in namespace "gc-4484"
Apr 30 13:54:57.503: INFO: Deleting pod "simpletest-rc-to-be-deleted-7k6wp" in namespace "gc-4484"
Apr 30 13:54:57.540: INFO: Deleting pod "simpletest-rc-to-be-deleted-7ms9t" in namespace "gc-4484"
Apr 30 13:54:57.610: INFO: Deleting pod "simpletest-rc-to-be-deleted-7tdwf" in namespace "gc-4484"
Apr 30 13:54:57.651: INFO: Deleting pod "simpletest-rc-to-be-deleted-8djkg" in namespace "gc-4484"
Apr 30 13:54:57.686: INFO: Deleting pod "simpletest-rc-to-be-deleted-8m5z6" in namespace "gc-4484"
Apr 30 13:54:57.730: INFO: Deleting pod
"simpletest-rc-to-be-deleted-8tjsq" in namespace "gc-4484"
Apr 30 13:54:57.779: INFO: Deleting pod "simpletest-rc-to-be-deleted-9fgx5" in namespace "gc-4484"
Apr 30 13:54:57.841: INFO: Deleting pod "simpletest-rc-to-be-deleted-9g9d6" in namespace "gc-4484"
Apr 30 13:54:57.972: INFO: Deleting pod "simpletest-rc-to-be-deleted-9ghjl" in namespace "gc-4484"
Apr 30 13:54:58.088: INFO: Deleting pod "simpletest-rc-to-be-deleted-b2ssl" in namespace "gc-4484"
Apr 30 13:54:58.168: INFO: Deleting pod "simpletest-rc-to-be-deleted-bjzl7" in namespace "gc-4484"
Apr 30 13:54:58.231: INFO: Deleting pod "simpletest-rc-to-be-deleted-blpdv" in namespace "gc-4484"
Apr 30 13:54:58.245: INFO: Deleting pod "simpletest-rc-to-be-deleted-bp8sw" in namespace "gc-4484"
Apr 30 13:54:58.261: INFO: Deleting pod "simpletest-rc-to-be-deleted-c5sth" in namespace "gc-4484"
Apr 30 13:54:58.315: INFO: Deleting pod "simpletest-rc-to-be-deleted-cjjs5" in namespace "gc-4484"
Apr 30 13:54:58.372: INFO: Deleting pod "simpletest-rc-to-be-deleted-cq4h6" in namespace "gc-4484"
Apr 30 13:54:58.410: INFO: Deleting pod "simpletest-rc-to-be-deleted-csqk2" in namespace "gc-4484"
Apr 30 13:54:58.433: INFO: Deleting pod "simpletest-rc-to-be-deleted-cznqf" in namespace "gc-4484"
Apr 30 13:54:58.471: INFO: Deleting pod "simpletest-rc-to-be-deleted-df5dm" in namespace "gc-4484"
Apr 30 13:54:58.487: INFO: Deleting pod "simpletest-rc-to-be-deleted-dkmnq" in namespace "gc-4484"
Apr 30 13:54:58.505: INFO: Deleting pod "simpletest-rc-to-be-deleted-dtfxx" in namespace "gc-4484"
Apr 30 13:54:58.575: INFO: Deleting pod "simpletest-rc-to-be-deleted-dtjdl" in namespace "gc-4484"
Apr 30 13:54:58.610: INFO: Deleting pod "simpletest-rc-to-be-deleted-f5d69" in namespace "gc-4484"
Apr 30 13:54:58.629: INFO: Deleting pod "simpletest-rc-to-be-deleted-fg6gr" in namespace "gc-4484"
Apr 30 13:54:58.670: INFO: Deleting pod "simpletest-rc-to-be-deleted-fk7kz" in namespace "gc-4484"
Apr 30 13:54:58.703: INFO: Deleting pod
"simpletest-rc-to-be-deleted-fxkn5" in namespace "gc-4484"
Apr 30 13:54:58.751: INFO: Deleting pod "simpletest-rc-to-be-deleted-fxmss" in namespace "gc-4484"
Apr 30 13:54:58.847: INFO: Deleting pod "simpletest-rc-to-be-deleted-gflxw" in namespace "gc-4484"
Apr 30 13:54:58.989: INFO: Deleting pod "simpletest-rc-to-be-deleted-grc7f" in namespace "gc-4484"
Apr 30 13:54:59.132: INFO: Deleting pod "simpletest-rc-to-be-deleted-h4w9k" in namespace "gc-4484"
Apr 30 13:54:59.229: INFO: Deleting pod "simpletest-rc-to-be-deleted-h9hvw" in namespace "gc-4484"
Apr 30 13:54:59.341: INFO: Deleting pod "simpletest-rc-to-be-deleted-hgrb6" in namespace "gc-4484"
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:54:59.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4484" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":37,"skipped":924,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:59.443: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting the auto-created API token
Apr 30 13:55:00.145: INFO: created pod pod-service-account-defaultsa
Apr 30 13:55:00.145: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 30 13:55:00.156: INFO: created pod pod-service-account-mountsa
Apr 30 13:55:00.157: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 30 13:55:00.177: INFO: created pod pod-service-account-nomountsa
Apr 30 13:55:00.177: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 30 13:55:00.187: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 30 13:55:00.187: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 30 13:55:00.195: INFO: created pod pod-service-account-mountsa-mountspec
Apr 30 13:55:00.195: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 30 13:55:00.219: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 30 13:55:00.219: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 30 13:55:00.243: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 30 13:55:00.243: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 30 13:55:00.250: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 30 13:55:00.250: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 30 13:55:00.268: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 30 13:55:00.268: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:00.268:
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2857" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":38,"skipped":936,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:53:51.858: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name s-test-opt-del-afd063ad-813b-4547-ac1b-b05ab51b9d3c
STEP: Creating secret with name s-test-opt-upd-a3635aa1-9f2c-4803-954f-4e5e85c166c4
STEP: Creating the pod
Apr 30 13:53:51.898: INFO: The status of Pod pod-secrets-b235f1cc-599c-4bd1-aa36-b07f18cf89e0 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:53:53.901: INFO: The status of Pod pod-secrets-b235f1cc-599c-4bd1-aa36-b07f18cf89e0 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:53:55.903: INFO: The status of Pod pod-secrets-b235f1cc-599c-4bd1-aa36-b07f18cf89e0 is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-afd063ad-813b-4547-ac1b-b05ab51b9d3c
STEP: Updating secret
s-test-opt-upd-a3635aa1-9f2c-4803-954f-4e5e85c166c4
STEP: Creating secret with name s-test-opt-create-e49fccd5-84a2-4270-8ce3-6ae239542ab2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:08.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8580" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":575,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:00.400: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-537b1414-ae00-48ac-a82d-24975c1a3752
STEP: Creating a pod to test consume configMaps
Apr 30 13:55:00.500: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5" in namespace
"configmap-6057" to be "Succeeded or Failed"
Apr 30 13:55:00.511: INFO: Pod "pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.890928ms
Apr 30 13:55:02.558: INFO: Pod "pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057668009s
Apr 30 13:55:04.561: INFO: Pod "pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061002668s
Apr 30 13:55:06.566: INFO: Pod "pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066202357s
Apr 30 13:55:08.573: INFO: Pod "pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073537391s
Apr 30 13:55:10.586: INFO: Pod "pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086385414s
STEP: Saw pod success
Apr 30 13:55:10.586: INFO: Pod "pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5" satisfied condition "Succeeded or Failed"
Apr 30 13:55:10.592: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-a2pwxc pod pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5 container agnhost-container: <nil>
STEP: delete the pod
Apr 30 13:55:10.946: INFO: Waiting for pod pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5 to disappear
Apr 30 13:55:10.954: INFO: Pod pod-configmaps-7ae2b91a-c7ec-44f1-81d0-9272ef7627f5 no longer exists
[AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:10.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6057" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":955,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:37.114: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service multi-endpoint-test in namespace services-860
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-860 to expose endpoints map[]
Apr 30 13:54:37.217: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
Apr 30 13:54:38.226: INFO: successfully validated that service multi-endpoint-test in namespace services-860 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-860
Apr 30 13:54:38.235: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:40.240: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-860 to expose endpoints map[pod1:[100]]
Apr 30 13:54:40.261: INFO: successfully validated that service multi-endpoint-test in
namespace services-860 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-860
Apr 30 13:54:40.270: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:42.274: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:44.277: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:46.274: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:48.277: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:50.293: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:52.283: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:54.281: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:54:56.302: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-860 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 30 13:54:56.384: INFO: successfully validated that service multi-endpoint-test in namespace services-860 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Apr 30 13:54:56.384: INFO: Creating new exec pod
Apr 30 13:55:09.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-860 exec execpodq4jrp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Apr 30 13:55:10.622: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Apr 30 13:55:10.622: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr
30 13:55:10.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-860 exec execpodq4jrp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.136.145.184 80' Apr 30 13:55:10.978: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.136.145.184 80\nConnection to 10.136.145.184 80 port [tcp/http] succeeded!\n" Apr 30 13:55:10.979: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 30 13:55:10.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-860 exec execpodq4jrp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Apr 30 13:55:11.365: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Apr 30 13:55:11.365: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 30 13:55:11.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-860 exec execpodq4jrp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.136.145.184 81' Apr 30 13:55:11.655: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.136.145.184 81\nConnection to 10.136.145.184 81 port [tcp/*] succeeded!\n" Apr 30 13:55:11.655: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" �[1mSTEP�[0m: Deleting pod pod1 in namespace services-860 �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-860 to expose endpoints map[pod2:[101]] Apr 30 13:55:11.704: INFO: successfully validated that service multi-endpoint-test in namespace services-860 exposes endpoints map[pod2:[101]] �[1mSTEP�[0m: Deleting pod pod2 in namespace services-860 �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-860 to expose endpoints map[] Apr 
30 13:55:11.753: INFO: successfully validated that service multi-endpoint-test in namespace services-860 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:55:11.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-860" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":11,"skipped":178,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 30 13:55:11.002: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating secret with name secret-test-457b4faa-b5a4-4712-aee7-cd36cdad7ecf �[1mSTEP�[0m: Creating a pod to test consume secrets Apr 30 13:55:11.159: INFO: Waiting up to 5m0s for pod "pod-secrets-054eef34-ab9b-4156-8e15-d59214311656" in namespace 
"secrets-9358" to be "Succeeded or Failed" Apr 30 13:55:11.176: INFO: Pod "pod-secrets-054eef34-ab9b-4156-8e15-d59214311656": Phase="Pending", Reason="", readiness=false. Elapsed: 16.797065ms Apr 30 13:55:13.181: INFO: Pod "pod-secrets-054eef34-ab9b-4156-8e15-d59214311656": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022007307s Apr 30 13:55:15.194: INFO: Pod "pod-secrets-054eef34-ab9b-4156-8e15-d59214311656": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035079272s Apr 30 13:55:17.200: INFO: Pod "pod-secrets-054eef34-ab9b-4156-8e15-d59214311656": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04097199s Apr 30 13:55:19.205: INFO: Pod "pod-secrets-054eef34-ab9b-4156-8e15-d59214311656": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046074664s Apr 30 13:55:21.210: INFO: Pod "pod-secrets-054eef34-ab9b-4156-8e15-d59214311656": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050806293s �[1mSTEP�[0m: Saw pod success Apr 30 13:55:21.212: INFO: Pod "pod-secrets-054eef34-ab9b-4156-8e15-d59214311656" satisfied condition "Succeeded or Failed" Apr 30 13:55:21.214: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-secrets-054eef34-ab9b-4156-8e15-d59214311656 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Apr 30 13:55:21.227: INFO: Waiting for pod pod-secrets-054eef34-ab9b-4156-8e15-d59214311656 to disappear Apr 30 13:55:21.229: INFO: Pod pod-secrets-054eef34-ab9b-4156-8e15-d59214311656 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:55:21.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-9358" for this suite. �[1mSTEP�[0m: Destroying namespace "secret-namespace-4804" for this suite. 
• ------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":961,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:11.822: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Apr 30 13:55:11.956: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:13.961: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:15.963: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Apr 30 13:55:15.986: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:17.999: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:19.995: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Apr 30 13:55:20.012: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 30 13:55:20.019: INFO: Pod pod-with-prestop-http-hook still exists
Apr 30 13:55:22.019: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 30 13:55:22.023: INFO: Pod pod-with-prestop-http-hook still exists
Apr 30 13:55:24.020: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 30 13:55:24.024: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:24.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4034" for this suite.
• ------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":180,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:24.047: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name secret-emptykey-test-95b79c0d-7575-438a-b4b3-914f038fc075
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:24.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8227" for this suite.
• ------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":13,"skipped":184,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:21.249: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
Apr 30 13:55:23.332: INFO: running pods: 0 < 3
Apr 30 13:55:25.340: INFO: running pods: 0 < 3
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:27.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2370" for this suite.
• ------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":41,"skipped":964,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:24.234: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 13:55:25.230: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 30 13:55:27.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 30, 13, 55, 25, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 55, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 55, 25, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 55, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 13:55:30.256: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:30.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7914" for this suite.
STEP: Destroying namespace "webhook-7914-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• ------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":14,"skipped":247,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:27.378: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 30 13:55:27.434: INFO: Waiting up to 5m0s for pod "pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f" in namespace "emptydir-4237" to be "Succeeded or Failed"
Apr 30 13:55:27.442: INFO: Pod "pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.417646ms
Apr 30 13:55:29.455: INFO: Pod "pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02071396s
Apr 30 13:55:31.461: INFO: Pod "pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f": Phase="Running", Reason="", readiness=true. Elapsed: 4.026305034s
Apr 30 13:55:33.465: INFO: Pod "pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f": Phase="Running", Reason="", readiness=false. Elapsed: 6.030429022s
Apr 30 13:55:35.469: INFO: Pod "pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034944041s
STEP: Saw pod success
Apr 30 13:55:35.469: INFO: Pod "pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f" satisfied condition "Succeeded or Failed"
Apr 30 13:55:35.473: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-ctsmx pod pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f container test-container: <nil>
STEP: delete the pod
Apr 30 13:55:35.502: INFO: Waiting for pod pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f to disappear
Apr 30 13:55:35.505: INFO: Pod pod-3b9fa4f4-6e36-40b3-841e-9d5d3013279f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:35.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4237" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":980,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:30.651: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 13:55:31.506: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 30 13:55:33.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 30, 13, 55, 31, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 55, 31, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 55, 31, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 55, 31, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 13:55:36.543: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:36.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4035" for this suite.
STEP: Destroying namespace "webhook-4035-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• ------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":15,"skipped":266,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:36.776: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:55:36.825: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 30 13:55:40.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2314 --namespace=crd-publish-openapi-2314 create -f -'
Apr 30 13:55:41.661: INFO: stderr: ""
Apr 30 13:55:41.661: INFO: stdout: "e2e-test-crd-publish-openapi-5958-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 30 13:55:41.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2314 --namespace=crd-publish-openapi-2314 delete e2e-test-crd-publish-openapi-5958-crds test-cr'
Apr 30 13:55:41.846: INFO: stderr: ""
Apr 30 13:55:41.846: INFO: stdout: "e2e-test-crd-publish-openapi-5958-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Apr 30 13:55:41.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2314 --namespace=crd-publish-openapi-2314 apply -f -'
Apr 30 13:55:42.277: INFO: stderr: ""
Apr 30 13:55:42.277: INFO: stdout: "e2e-test-crd-publish-openapi-5958-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 30 13:55:42.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2314 --namespace=crd-publish-openapi-2314 delete e2e-test-crd-publish-openapi-5958-crds test-cr'
Apr 30 13:55:42.464: INFO: stderr: ""
Apr 30 13:55:42.464: INFO: stdout: "e2e-test-crd-publish-openapi-5958-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Apr 30 13:55:42.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2314 explain e2e-test-crd-publish-openapi-5958-crds'
Apr 30 13:55:42.882: INFO: stderr: ""
Apr 30 13:55:42.883: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5958-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:45.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2314" for this suite.
• ------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":16,"skipped":275,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:45.466: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:45.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3980" for this suite.
• ------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":17,"skipped":298,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:45.530: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:55:45.558: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:55:46.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5749" for this suite.
• ------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":18,"skipped":303,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:46.649: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:55:46.675: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 30 13:55:50.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 --namespace=crd-publish-openapi-6065 create -f -'
Apr 30 13:55:52.157: INFO: stderr: ""
Apr 30 13:55:52.157: INFO: stdout: "e2e-test-crd-publish-openapi-6449-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 30 13:55:52.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 --namespace=crd-publish-openapi-6065 delete e2e-test-crd-publish-openapi-6449-crds test-foo'
Apr 30 13:55:52.312: INFO: stderr: ""
Apr 30 13:55:52.312: INFO: stdout: "e2e-test-crd-publish-openapi-6449-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 30 13:55:52.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 --namespace=crd-publish-openapi-6065 apply -f -'
Apr 30 13:55:52.727: INFO: stderr: ""
Apr 30 13:55:52.727: INFO: stdout: "e2e-test-crd-publish-openapi-6449-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 30 13:55:52.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 --namespace=crd-publish-openapi-6065 delete e2e-test-crd-publish-openapi-6449-crds test-foo'
Apr 30 13:55:52.883: INFO: stderr: ""
Apr 30 13:55:52.883: INFO: stdout: "e2e-test-crd-publish-openapi-6449-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with value outside defined enum values
Apr 30 13:55:52.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 --namespace=crd-publish-openapi-6065 create -f -'
Apr 30 13:55:53.306: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr 30 13:55:53.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 --namespace=crd-publish-openapi-6065 create -f -'
Apr 30 13:55:53.759: INFO: rc: 1
Apr 30 13:55:53.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 --namespace=crd-publish-openapi-6065 apply -f -'
Apr 30 13:55:54.100: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr 30 13:55:54.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 --namespace=crd-publish-openapi-6065 create -f -'
Apr 30 13:55:54.532: INFO: rc: 1
Apr 30 13:55:54.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 --namespace=crd-publish-openapi-6065 apply -f -'
Apr 30 13:55:54.966: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr 30 13:55:54.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 explain e2e-test-crd-publish-openapi-6449-crds'
Apr 30 13:55:55.338: INFO: stderr: ""
Apr 30 13:55:55.338: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6449-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Apr 30 13:55:55.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 explain e2e-test-crd-publish-openapi-6449-crds.metadata'
Apr 30 13:55:55.815: INFO: stderr: ""
Apr 30 13:55:55.815: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6449-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Apr 30 13:55:55.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 explain e2e-test-crd-publish-openapi-6449-crds.spec'
Apr 30 13:55:57.040: INFO: stderr: ""
Apr 30 13:55:57.040: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6449-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Apr 30 13:55:57.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 explain e2e-test-crd-publish-openapi-6449-crds.spec.bars'
Apr 30 13:55:57.447: INFO: stderr: ""
Apr 30 13:55:57.447: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6449-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Apr 30 13:55:57.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6065 explain e2e-test-crd-publish-openapi-6449-crds.spec.bars2'
Apr 30 13:55:57.897: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:00.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6065" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":19,"skipped":327,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:35.580: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename hostport
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled
Apr 30 13:55:35.719: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:37.727: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:39.725: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:41.728: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.18.0.5 on the node which pod1 resides and expect scheduled
Apr 30 13:55:41.750: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:43.757: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:45.755: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:47.762: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:49.758: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.5 but use UDP protocol on the node which pod2 resides
Apr 30 13:55:49.772: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:51.778: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:53.781: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:55.778: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:57.780: INFO: The status of Pod pod3 is Running (Ready = true)
Apr 30 13:55:57.796: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:59.802: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:56:01.800: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:56:03.801: INFO: The status of Pod e2e-host-exec is Running (Ready = true)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323
Apr 30 13:56:03.804: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.5 http://127.0.0.1:54323/hostname] Namespace:hostport-1599 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:56:03.804: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:56:03.805: INFO: ExecWithOptions: Clientset creation
Apr 30 13:56:03.805: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-1599/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.18.0.5+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.5, port: 54323
Apr 30 13:56:03.937: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.5:54323/hostname] Namespace:hostport-1599 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:56:03.937: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:56:03.938: INFO: ExecWithOptions: Clientset creation
Apr 30 13:56:03.938: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-1599/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.18.0.5%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.5, port: 54323 UDP
Apr 30 13:56:04.082: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.5 54323] Namespace:hostport-1599 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:56:04.082: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:56:04.083: INFO: ExecWithOptions: Clientset creation
Apr 30 13:56:04.083: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-1599/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.18.0.5+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
[AfterEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:09.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostport-1599" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":43,"skipped":1006,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:55:08.749: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name cm-test-opt-del-63ed0b03-b2a7-4a85-ad2c-afe1dca602c2
STEP: Creating configMap with name cm-test-opt-upd-a3a59c0f-d0f9-497e-b1f1-8de496872c40
STEP: Creating the pod
Apr 30 13:55:08.825: INFO: The status of Pod pod-configmaps-e8844c4e-5eb9-4cfb-80e9-fec23a6c165f is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:10.834: INFO: The status of Pod pod-configmaps-e8844c4e-5eb9-4cfb-80e9-fec23a6c165f is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:12.832: INFO: The status of Pod pod-configmaps-e8844c4e-5eb9-4cfb-80e9-fec23a6c165f is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:55:14.829: INFO: The status of Pod pod-configmaps-e8844c4e-5eb9-4cfb-80e9-fec23a6c165f is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-63ed0b03-b2a7-4a85-ad2c-afe1dca602c2
STEP: Updating configmap cm-test-opt-upd-a3a59c0f-d0f9-497e-b1f1-8de496872c40
STEP: Creating configMap with name cm-test-opt-create-775461d6-f567-4ff9-bc34-ddb699c66207
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:21.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5447" for this suite.
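Raw Ginkgo output colorizes its markers (bold `STEP`, a green `•` per passing spec, a cyan `S` per skipped spec) with ANSI SGR escape sequences; when such a log is captured without decoding them, they surface as `�[1m`-style mojibake. A filter along these lines strips the color codes before archiving. This is a generic sketch, not part of the test framework:

```python
import re

# ANSI SGR sequences: ESC, '[', optional numeric parameters separated by ';', final 'm'.
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text):
    """Return the log text with ANSI color codes removed."""
    return ANSI_SGR.sub("", text)
```

The same effect is often achieved at capture time by passing a no-color flag to the runner, but post-hoc stripping works on logs that were already collected.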
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":604,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:21.261: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 30 13:56:24.300: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:24.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2262" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":610,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:24.362: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap that has name configmap-test-emptyKey-163d30fc-2565-4c45-8c7a-89edfaec1f76
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:24.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9736" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":34,"skipped":650,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:24.415: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Apr 30 13:56:24.439: INFO: The status of Pod annotationupdate0a2445f4-2bc1-4d67-8dd9-dd5358dc0e84 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:56:26.445: INFO: The status of Pod annotationupdate0a2445f4-2bc1-4d67-8dd9-dd5358dc0e84 is Running (Ready = true)
Apr 30 13:56:26.964: INFO: Successfully updated pod "annotationupdate0a2445f4-2bc1-4d67-8dd9-dd5358dc0e84"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:30.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9773" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":663,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:31.021: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 30 13:56:31.054: INFO: Waiting up to 5m0s for pod "pod-2fd80481-614b-46a3-b523-e315a38a0929" in namespace "emptydir-4985" to be "Succeeded or Failed"
Apr 30 13:56:31.057: INFO: Pod "pod-2fd80481-614b-46a3-b523-e315a38a0929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75909ms
Apr 30 13:56:33.062: INFO: Pod "pod-2fd80481-614b-46a3-b523-e315a38a0929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007576206s
Apr 30 13:56:35.066: INFO: Pod "pod-2fd80481-614b-46a3-b523-e315a38a0929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012212968s
STEP: Saw pod success
Apr 30 13:56:35.066: INFO: Pod "pod-2fd80481-614b-46a3-b523-e315a38a0929" satisfied condition "Succeeded or Failed"
Apr 30 13:56:35.069: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-2fd80481-614b-46a3-b523-e315a38a0929 container test-container: <nil>
STEP: delete the pod
Apr 30 13:56:35.082: INFO: Waiting for pod pod-2fd80481-614b-46a3-b523-e315a38a0929 to disappear
Apr 30 13:56:35.084: INFO: Pod pod-2fd80481-614b-46a3-b523-e315a38a0929 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:35.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4985" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":684,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:35.139: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:35.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-1498" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":37,"skipped":718,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:35.218: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:35.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5713" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":38,"skipped":746,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:35.272: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 30 13:56:35.297: INFO: Waiting up to 5m0s for pod "pod-df77f1f5-3e79-452a-92ed-3f2fd01ad65a" in namespace "emptydir-3741" to be "Succeeded or Failed"
Apr 30 13:56:35.301: INFO: Pod
"pod-df77f1f5-3e79-452a-92ed-3f2fd01ad65a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.833798ms Apr 30 13:56:37.304: INFO: Pod "pod-df77f1f5-3e79-452a-92ed-3f2fd01ad65a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007531653s Apr 30 13:56:39.309: INFO: Pod "pod-df77f1f5-3e79-452a-92ed-3f2fd01ad65a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012189955s �[1mSTEP�[0m: Saw pod success Apr 30 13:56:39.309: INFO: Pod "pod-df77f1f5-3e79-452a-92ed-3f2fd01ad65a" satisfied condition "Succeeded or Failed" Apr 30 13:56:39.312: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-df77f1f5-3e79-452a-92ed-3f2fd01ad65a container test-container: <nil> �[1mSTEP�[0m: delete the pod Apr 30 13:56:39.325: INFO: Waiting for pod pod-df77f1f5-3e79-452a-92ed-3f2fd01ad65a to disappear Apr 30 13:56:39.327: INFO: Pod pod-df77f1f5-3e79-452a-92ed-3f2fd01ad65a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:56:39.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-3741" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":755,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:39.343: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-f5278b7c-b15f-44a6-89e6-4ee1cf5b7cf5
STEP: Creating a pod to test consume configMaps
Apr 30 13:56:39.374: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d8342ca-4e67-4073-bdfd-d3000200c024" in namespace "projected-2216" to be "Succeeded or Failed"
Apr 30 13:56:39.377: INFO: Pod "pod-projected-configmaps-0d8342ca-4e67-4073-bdfd-d3000200c024": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134173ms
Apr 30 13:56:41.381: INFO: Pod "pod-projected-configmaps-0d8342ca-4e67-4073-bdfd-d3000200c024": Phase="Running", Reason="", readiness=false. Elapsed: 2.006116044s
Apr 30 13:56:43.385: INFO: Pod "pod-projected-configmaps-0d8342ca-4e67-4073-bdfd-d3000200c024": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010642324s
STEP: Saw pod success
Apr 30 13:56:43.385: INFO: Pod "pod-projected-configmaps-0d8342ca-4e67-4073-bdfd-d3000200c024" satisfied condition "Succeeded or Failed"
Apr 30 13:56:43.388: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-projected-configmaps-0d8342ca-4e67-4073-bdfd-d3000200c024 container agnhost-container: <nil>
STEP: delete the pod
Apr 30 13:56:43.400: INFO: Waiting for pod pod-projected-configmaps-0d8342ca-4e67-4073-bdfd-d3000200c024 to disappear
Apr 30 13:56:43.402: INFO: Pod pod-projected-configmaps-0d8342ca-4e67-4073-bdfd-d3000200c024 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:43.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2216" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":761,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:43.444: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:43.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4717" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":41,"skipped":793,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:43.537: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 30 13:56:43.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9fd50b43-695d-4d5e-a1b9-04238e3ef0ec" in namespace "downward-api-8589" to be "Succeeded or Failed"
Apr 30 13:56:43.564: INFO: Pod "downwardapi-volume-9fd50b43-695d-4d5e-a1b9-04238e3ef0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087444ms
Apr 30 13:56:45.569: INFO: Pod "downwardapi-volume-9fd50b43-695d-4d5e-a1b9-04238e3ef0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006452423s
Apr 30 13:56:47.571: INFO: Pod "downwardapi-volume-9fd50b43-695d-4d5e-a1b9-04238e3ef0ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009208911s
STEP: Saw pod success
Apr 30 13:56:47.572: INFO: Pod "downwardapi-volume-9fd50b43-695d-4d5e-a1b9-04238e3ef0ec" satisfied condition "Succeeded or Failed"
Apr 30 13:56:47.574: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod downwardapi-volume-9fd50b43-695d-4d5e-a1b9-04238e3ef0ec container client-container: <nil>
STEP: delete the pod
Apr 30 13:56:47.585: INFO: Waiting for pod downwardapi-volume-9fd50b43-695d-4d5e-a1b9-04238e3ef0ec to disappear
Apr 30 13:56:47.587: INFO: Pod downwardapi-volume-9fd50b43-695d-4d5e-a1b9-04238e3ef0ec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:47.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8589" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":846,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:47.671: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 30 13:56:51.722: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:56:51.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8365" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":895,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:51.750: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-8094
STEP: creating service affinity-nodeport in namespace services-8094
STEP: creating replication controller affinity-nodeport in namespace services-8094
I0430 13:56:51.786120 21 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-8094, replica count: 3
I0430 13:56:54.837666 21 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 30 13:56:54.846: INFO: Creating new exec pod
Apr 30 13:56:57.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8094 exec execpod-affinitygnp57 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Apr 30 13:56:58.024: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Apr 30 13:56:58.024: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 30 13:56:58.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8094 exec execpod-affinitygnp57 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.4.102 80'
Apr 30 13:56:58.197: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.4.102 80\nConnection to 10.140.4.102 80 port [tcp/http] succeeded!\n"
Apr 30 13:56:58.197: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 30 13:56:58.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8094 exec execpod-affinitygnp57 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 31063'
Apr 30 13:56:58.343: INFO: stderr: "+ + ncecho -v -t hostName -w 2\n 172.18.0.6 31063\nConnection to 172.18.0.6 31063 port [tcp/*] succeeded!\n"
Apr 30 13:56:58.343: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 30 13:56:58.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8094 exec execpod-affinitygnp57 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 31063'
Apr 30 13:56:58.493: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 31063\nConnection to 172.18.0.7 31063 port [tcp/*] succeeded!\n"
Apr 30 13:56:58.493: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 30 13:56:58.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8094 exec execpod-affinitygnp57 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.6:31063/ ; done'
Apr 30 13:56:58.733: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.6:31063/\n"
Apr 30 13:56:58.733: INFO: stdout: "\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb\naffinity-nodeport-krzpb"
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Received response from host: affinity-nodeport-krzpb
Apr 30 13:56:58.733: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-8094, will wait for the garbage collector to delete the pods
Apr 30 13:56:58.802: INFO: Deleting ReplicationController affinity-nodeport took: 5.457543ms
Apr 30 13:56:58.902: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.666008ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:57:00.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8094" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":44,"skipped":904,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:00.462: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-6436
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6436
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6436
Apr 30 13:56:00.579: INFO: Found 0 stateful pods, waiting for 1
Apr 30 13:56:10.599: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Apr 30 13:56:10.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6436 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 30 13:56:10.809: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Apr 30 13:56:10.809: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 30 13:56:10.809: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 30 13:56:10.818: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 30 13:56:20.824: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 30 13:56:20.824: INFO: Waiting for statefulset status.replicas updated to 0
Apr 30 13:56:20.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999976s
Apr 30 13:56:21.840: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99671458s
Apr 30 13:56:22.844: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991663743s
Apr 30 13:56:23.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987836272s
Apr 30 13:56:24.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983597164s
Apr 30 13:56:25.857: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.979036857s
Apr 30 13:56:26.861: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974890777s
Apr 30 13:56:27.865: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.971038686s
Apr 30 13:56:28.869: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.967157424s
Apr 30 13:56:29.873: INFO: Verifying statefulset ss doesn't scale past 1 for another 963.038892ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6436
Apr 30 13:56:30.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6436 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 30 13:56:31.044: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Apr 30 13:56:31.044: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 30 13:56:31.044: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 30 13:56:31.049: INFO: Found 1 stateful pods, waiting for 3
Apr 30 13:56:41.054: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 30 13:56:41.054: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 30 13:56:41.054: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Apr 30 13:56:41.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6436 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 30 13:56:41.222: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Apr 30 13:56:41.222: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 30 13:56:41.222: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 30 13:56:41.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6436 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 30 13:56:41.386: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Apr 30 13:56:41.386: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 30 13:56:41.386: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 30 13:56:41.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6436 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 30 13:56:41.549: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Apr 30 13:56:41.549: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 30 13:56:41.549: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 30 13:56:41.549: INFO: Waiting for statefulset status.replicas updated to 0
Apr 30 13:56:41.552: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Apr 30 13:56:51.560: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 30 13:56:51.560: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Apr 30 13:56:51.560: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Apr 30 13:56:51.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999638s
Apr 30 13:56:52.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996994962s
Apr 30 13:56:53.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993115745s
Apr 30 13:56:54.581: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989050697s
Apr 30 13:56:55.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984691849s
Apr 30 13:56:56.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980513451s
Apr 30 13:56:57.593: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976745422s
Apr 30 13:56:58.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.97274718s
Apr 30 13:56:59.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.968401917s
Apr 30 13:57:00.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 964.403311ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6436
Apr 30 13:57:01.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6436 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 30 13:57:01.773: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Apr 30 13:57:01.773: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 30 13:57:01.773: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 30 13:57:01.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6436 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 30 13:57:01.927: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Apr 30 13:57:01.927: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 30 13:57:01.927: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 30 13:57:01.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6436 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 30 13:57:02.072: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Apr 30 13:57:02.072: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 30 13:57:02.072: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 30 13:57:02.072: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Apr 30 13:57:12.086: INFO: Deleting all statefulset in ns statefulset-6436
Apr 30 13:57:12.089: INFO: Scaling statefulset ss to 0
Apr 30 13:57:12.097: INFO: Waiting for statefulset status.replicas updated to 0
Apr 30 13:57:12.100: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:57:12.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6436" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":20,"skipped":343,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:57:00.537: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in
namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 �[1mSTEP�[0m: Creating service test in namespace statefulset-7655 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating statefulset ss in namespace statefulset-7655 Apr 30 13:57:00.566: INFO: Found 0 stateful pods, waiting for 1 Apr 30 13:57:10.572: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: getting scale subresource �[1mSTEP�[0m: updating a scale subresource �[1mSTEP�[0m: verifying the statefulset Spec.Replicas was modified �[1mSTEP�[0m: Patch a scale subresource �[1mSTEP�[0m: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Apr 30 13:57:10.595: INFO: Deleting all statefulset in ns statefulset-7655 Apr 30 13:57:10.599: INFO: Scaling statefulset ss to 0 Apr 30 13:57:20.615: INFO: Waiting for statefulset status.replicas updated to 0 Apr 30 13:57:20.617: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:57:20.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-7655" for this suite. 
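The scale-subresource steps logged above can be reproduced from the command line. A minimal sketch, assuming a live cluster, a StatefulSet named `ss` in namespace `statefulset-7655` as in this run, and a kubectl new enough to support `--subresource` (v1.24+):

```shell
# Read the scale subresource directly from the API server
kubectl get --raw /apis/apps/v1/namespaces/statefulset-7655/statefulsets/ss/scale

# Patch replicas through the scale subresource (requires kubectl v1.24+)
kubectl patch statefulset ss -n statefulset-7655 --subresource=scale \
  --type=merge -p '{"spec":{"replicas":3}}'

# Equivalent high-level command, which also goes through /scale
kubectl scale statefulset ss -n statefulset-7655 --replicas=3
```

The test then confirms that `Spec.Replicas` on the StatefulSet itself reflects the value written through the subresource.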
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":45,"skipped":909,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:57:20.665: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: submitting the pod to kubernetes
Apr 30 13:57:20.692: INFO: The status of Pod pod-update-activedeadlineseconds-d8a6fc9e-caee-4c34-905e-d0c66ceb254b is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:57:22.696: INFO: The status of Pod pod-update-activedeadlineseconds-d8a6fc9e-caee-4c34-905e-d0c66ceb254b is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 30 13:57:23.213: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d8a6fc9e-caee-4c34-905e-d0c66ceb254b"
Apr 30 13:57:23.213: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d8a6fc9e-caee-4c34-905e-d0c66ceb254b" in namespace "pods-32" to be "terminated due to deadline exceeded"
Apr 30 13:57:23.216: INFO: Pod "pod-update-activedeadlineseconds-d8a6fc9e-caee-4c34-905e-d0c66ceb254b": Phase="Running", Reason="", readiness=true. Elapsed: 2.602717ms
Apr 30 13:57:25.219: INFO: Pod "pod-update-activedeadlineseconds-d8a6fc9e-caee-4c34-905e-d0c66ceb254b": Phase="Running", Reason="", readiness=true. Elapsed: 2.006121734s
Apr 30 13:57:27.222: INFO: Pod "pod-update-activedeadlineseconds-d8a6fc9e-caee-4c34-905e-d0c66ceb254b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.009283141s
Apr 30 13:57:27.222: INFO: Pod "pod-update-activedeadlineseconds-d8a6fc9e-caee-4c34-905e-d0c66ceb254b" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:57:27.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-32" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":928,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:57:27.273: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:57:27.297: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:57:33.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3855" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":47,"skipped":957,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:57:33.481: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 30 13:57:36.522: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:57:36.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4142" for this suite.
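The setup the termination-message test verifies (message "DONE" written as a non-root user to a non-default `terminationMessagePath`) looks roughly like the following manifest; names, image, and UID are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.35
    # Write the termination message, then exit 0 so the container reaches Succeeded
    command: ["/bin/sh", "-c", "echo -n DONE > /tmp/termination-log"]
    terminationMessagePath: /tmp/termination-log   # non-default path
    securityContext:
      runAsUser: 1000                              # non-root user
```

After the container exits, the kubelet copies the file's contents into `status.containerStatuses[].state.terminated.message`, which is what the test compares against "DONE".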
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":965,"failed":0}
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:57:12.127: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: referencing a single matching pod
STEP: referencing matching pods with named port
STEP: creating empty Endpoints and EndpointSlices for no matching Pods
STEP: recreating EndpointSlices after they've been deleted
Apr 30 13:57:32.267: INFO: EndpointSlice for Service endpointslice-459/example-named-port not found
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:57:42.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-459" for this suite.
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":21,"skipped":352,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:57:42.288: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should support proxy with --port 0 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: starting the proxy server
Apr 30 13:57:42.304: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3045 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:57:42.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3045" for this suite.
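`--port 0` (the `-p 0` in the logged command) asks `kubectl proxy` to bind a kernel-assigned ephemeral port, which it prints on startup; the test then curls `/api/` through that port. A minimal sketch of the same check, assuming a reachable cluster:

```shell
# Start the proxy on a random free port; on startup it prints a line like
# "Starting to serve on 127.0.0.1:<port>"
kubectl proxy --port=0 --disable-filter=true &

# Using the printed port, the API discovery endpoint can then be queried, e.g.:
# curl http://127.0.0.1:<port>/api/
```

`--disable-filter=true` turns off the proxy's request filter, as the e2e test does; it is unsafe outside test environments.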
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":22,"skipped":354,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:57:42.372: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:57:44.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3914" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":356,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:57:36.553: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:57:36.578: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 30 13:57:41.582: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 30 13:57:41.582: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 30 13:57:43.586: INFO: Creating deployment "test-rollover-deployment"
Apr 30 13:57:43.592: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Apr 30 13:57:45.600: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Apr 30 13:57:45.608: INFO: Ensure that both replica sets have 1 created replica
Apr 30 13:57:45.620: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new
image update Apr 30 13:57:45.628: INFO: Updating deployment test-rollover-deployment Apr 30 13:57:45.628: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 30 13:57:47.640: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 30 13:57:47.650: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 30 13:57:47.659: INFO: all replica sets need to contain the pod-template-hash label Apr 30 13:57:47.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 47, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 30 13:57:49.667: INFO: all replica sets need to contain the pod-template-hash label Apr 30 13:57:49.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 47, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 30 13:57:51.666: INFO: all replica sets need to contain the pod-template-hash label Apr 30 13:57:51.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 47, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 30 13:57:53.667: INFO: all replica sets need to contain the pod-template-hash label Apr 30 13:57:53.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 47, 0, time.Local), 
LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 30 13:57:55.666: INFO: all replica sets need to contain the pod-template-hash label Apr 30 13:57:55.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 57, 47, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 57, 43, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 30 13:57:57.669: INFO: Apr 30 13:57:57.669: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 30 13:57:57.682: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5733 a9ff7faa-f02e-4b2c-bcbd-c4a798756840 16289 2 2022-04-30 13:57:43 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-30 13:57:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:57:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000861018 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-30 13:57:43 +0000 UTC,LastTransitionTime:2022-04-30 13:57:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668b7f667d" has successfully progressed.,LastUpdateTime:2022-04-30 13:57:57 +0000 UTC,LastTransitionTime:2022-04-30 13:57:43 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 30 13:57:57.686: INFO: New ReplicaSet "test-rollover-deployment-668b7f667d" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668b7f667d deployment-5733 3977b5ec-8ff3-4f68-afe0-6fcd4c8381aa 16279 2 2022-04-30 13:57:45 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a9ff7faa-f02e-4b2c-bcbd-c4a798756840 0xc0041aa117 0xc0041aa118}] [] [{kube-controller-manager Update apps/v1 2022-04-30 13:57:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9ff7faa-f02e-4b2c-bcbd-c4a798756840\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:57:57 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668b7f667d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0041aa298 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:57:57.686: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 30 13:57:57.686: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5733 7b7f9fed-1491-4d6c-8356-79ace7c64644 16288 2 2022-04-30 13:57:36 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a9ff7faa-f02e-4b2c-bcbd-c4a798756840 0xc003c19fd7 0xc003c19fd8}] [] [{e2e.test Update apps/v1 2022-04-30 13:57:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:57:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9ff7faa-f02e-4b2c-bcbd-c4a798756840\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:57:57 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0041aa0a8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:57:57.686: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-784bc44b77 deployment-5733 d2a700d0-0190-4f57-9956-c9dbc75d7e52 16169 2 2022-04-30 13:57:43 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a9ff7faa-f02e-4b2c-bcbd-c4a798756840 0xc0041aa3a7 0xc0041aa3a8}] [] [{kube-controller-manager Update apps/v1 2022-04-30 13:57:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9ff7faa-f02e-4b2c-bcbd-c4a798756840\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 13:57:45 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 784bc44b77,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0041aa848 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] 
<nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 30 13:57:57.689: INFO: Pod "test-rollover-deployment-668b7f667d-4p4sg" is available: &Pod{ObjectMeta:{test-rollover-deployment-668b7f667d-4p4sg test-rollover-deployment-668b7f667d- deployment-5733 a52a3d86-3a48-491e-8583-976be6c7e0a3 16183 0 2022-04-30 13:57:45 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668b7f667d 3977b5ec-8ff3-4f68-afe0-6fcd4c8381aa 0xc0041abb27 0xc0041abb28}] [] [{kube-controller-manager Update v1 2022-04-30 13:57:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3977b5ec-8ff3-4f68-afe0-6fcd4c8381aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 13:57:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.147\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pbtnc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbtnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-o9uwcm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:57:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:57:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:57:47 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 13:57:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.147,StartTime:2022-04-30 13:57:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-30 13:57:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://b0e3cf56e2cb4f59d6dc0f68ea7de42a341a2223354a7de0aaf344d1cbc0c63f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.147,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:57:57.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5733" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":49,"skipped":976,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:57:44.417: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: set up a multi version CRD Apr 30 13:57:44.435: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:57:58.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7546" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":24,"skipped":357,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:57:57.716: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 30 13:57:57.748: INFO: Waiting up to 5m0s for pod "pod-b4af3afc-193f-47a2-ab86-112d112933ed" in namespace "emptydir-8048" to be "Succeeded or Failed" Apr 30 13:57:57.757: INFO: Pod "pod-b4af3afc-193f-47a2-ab86-112d112933ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5366ms Apr 30 13:57:59.761: INFO: Pod "pod-b4af3afc-193f-47a2-ab86-112d112933ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012847305s Apr 30 13:58:01.766: INFO: Pod "pod-b4af3afc-193f-47a2-ab86-112d112933ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017044223s STEP: Saw pod success Apr 30 13:58:01.766: INFO: Pod "pod-b4af3afc-193f-47a2-ab86-112d112933ed" satisfied condition "Succeeded or Failed" Apr 30 13:58:01.769: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-b4af3afc-193f-47a2-ab86-112d112933ed container test-container: <nil> STEP: delete the pod Apr 30 13:58:01.783: INFO: Waiting for pod pod-b4af3afc-193f-47a2-ab86-112d112933ed to disappear Apr 30 13:58:01.786: INFO: Pod pod-b4af3afc-193f-47a2-ab86-112d112933ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:58:01.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8048" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:58:01.864: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building 
a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating replication controller my-hostname-basic-b6f54680-d136-4685-8b68-fcee951c151c Apr 30 13:58:01.891: INFO: Pod name my-hostname-basic-b6f54680-d136-4685-8b68-fcee951c151c: Found 0 pods out of 1 Apr 30 13:58:06.896: INFO: Pod name my-hostname-basic-b6f54680-d136-4685-8b68-fcee951c151c: Found 1 pods out of 1 Apr 30 13:58:06.896: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b6f54680-d136-4685-8b68-fcee951c151c" are running Apr 30 13:58:06.898: INFO: Pod "my-hostname-basic-b6f54680-d136-4685-8b68-fcee951c151c-4j5tb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-30 13:58:01 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-30 13:58:03 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-30 13:58:03 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-30 13:58:01 +0000 UTC Reason: Message:}]) Apr 30 13:58:06.898: INFO: Trying to dial the pod Apr 30 13:58:11.907: INFO: Controller my-hostname-basic-b6f54680-d136-4685-8b68-fcee951c151c: Got expected result from replica 1 [my-hostname-basic-b6f54680-d136-4685-8b68-fcee951c151c-4j5tb]: 
"my-hostname-basic-b6f54680-d136-4685-8b68-fcee951c151c-4j5tb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:58:11.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-3439" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":51,"skipped":1045,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 30 13:57:58.673: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pod-network-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Performing setup for networking test in namespace pod-network-test-2606 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Apr 
30 13:57:58.693: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 30 13:57:58.742: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 30 13:58:00.746: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 30 13:58:02.745: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 30 13:58:04.747: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 30 13:58:06.747: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 30 13:58:08.745: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 30 13:58:10.746: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 30 13:58:12.746: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 30 13:58:14.747: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 30 13:58:16.746: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 30 13:58:18.751: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 30 13:58:18.756: INFO: The status of Pod netserver-1 is Running (Ready = true) Apr 30 13:58:18.762: INFO: The status of Pod netserver-2 is Running (Ready = true) Apr 30 13:58:18.770: INFO: The status of Pod netserver-3 is Running (Ready = true) STEP: Creating test pods Apr 30 13:58:20.797: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4 Apr 30 13:58:20.797: INFO: Going to poll 192.168.2.140 on port 8081 at least 0 times, with a maximum of 46 tries before failing Apr 30 13:58:20.799: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.140 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2606 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 30 13:58:20.799: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 30 13:58:20.800: INFO: ExecWithOptions: Clientset creation 
Apr 30 13:58:20.800: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-2606/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.2.140+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Apr 30 13:58:21.866: INFO: Found all 1 expected endpoints: [netserver-0] Apr 30 13:58:21.866: INFO: Going to poll 192.168.0.101 on port 8081 at least 0 times, with a maximum of 46 tries before failing Apr 30 13:58:21.869: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.0.101 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2606 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 30 13:58:21.869: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 30 13:58:21.870: INFO: ExecWithOptions: Clientset creation Apr 30 13:58:21.870: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-2606/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.0.101+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Apr 30 13:58:22.946: INFO: Found all 1 expected endpoints: [netserver-1] Apr 30 13:58:22.946: INFO: Going to poll 192.168.3.99 on port 8081 at least 0 times, with a maximum of 46 tries before failing Apr 30 13:58:22.949: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.3.99 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2606 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 30 13:58:22.949: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 30 13:58:22.950: INFO: ExecWithOptions: Clientset 
creation Apr 30 13:58:22.950: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-2606/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.3.99+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Apr 30 13:58:24.026: INFO: Found all 1 expected endpoints: [netserver-2] Apr 30 13:58:24.026: INFO: Going to poll 192.168.6.149 on port 8081 at least 0 times, with a maximum of 46 tries before failing Apr 30 13:58:24.029: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.6.149 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2606 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 30 13:58:24.029: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 30 13:58:24.030: INFO: ExecWithOptions: Clientset creation Apr 30 13:58:24.030: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-2606/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.6.149+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Apr 30 13:58:25.126: INFO: Found all 1 expected endpoints: [netserver-3] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:58:25.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2606" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":366,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:58:11.965: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4626.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4626.svc.cluster.local CNAME > 
/results/jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 30 13:58:14.014: INFO: DNS probes using dns-test-8767bfb2-40cb-4b2c-b831-58dc181b4e6d succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4626.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4626.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 30 13:58:16.047: INFO: File wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:16.050: INFO: File jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:16.050: INFO: Lookups using dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c failed for: [wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local] Apr 30 13:58:21.055: INFO: File wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 30 13:58:21.059: INFO: File jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:21.059: INFO: Lookups using dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c failed for: [wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local] Apr 30 13:58:26.055: INFO: File wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:26.058: INFO: File jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:26.058: INFO: Lookups using dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c failed for: [wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local] Apr 30 13:58:31.055: INFO: File wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:31.059: INFO: File jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:31.059: INFO: Lookups using dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c failed for: [wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local] Apr 30 13:58:36.055: INFO: File wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 30 13:58:36.058: INFO: File jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:36.058: INFO: Lookups using dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c failed for: [wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local] Apr 30 13:58:41.055: INFO: File wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:41.058: INFO: File jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local from pod dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 30 13:58:41.058: INFO: Lookups using dns-4626/dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c failed for: [wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local] Apr 30 13:58:46.059: INFO: DNS probes using dns-test-6a89b082-ba4e-49c0-9987-1b587a0b8e4c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4626.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4626.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4626.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4626.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 30 13:58:48.110: INFO: DNS probes using dns-test-72b49226-45b0-4b52-a434-eb8b6b94a7a7 succeeded 
STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:58:48.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4626" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":52,"skipped":1089,"failed":0} ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:58:25.194: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod pod-subpath-test-downwardapi-lhwp STEP: Creating a pod to test atomic-volume-subpath Apr 30 13:58:25.222: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lhwp" in namespace "subpath-8416" to be "Succeeded or Failed" Apr 30 13:58:25.224: INFO: Pod "pod-subpath-test-downwardapi-lhwp": 
Phase="Pending", Reason="", readiness=false. Elapsed: 2.240903ms Apr 30 13:58:27.228: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 2.005971706s Apr 30 13:58:29.232: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 4.010633687s Apr 30 13:58:31.236: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 6.014454605s Apr 30 13:58:33.241: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 8.018810335s Apr 30 13:58:35.245: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 10.023478203s Apr 30 13:58:37.249: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 12.027144943s Apr 30 13:58:39.253: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 14.031532622s Apr 30 13:58:41.257: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 16.035685474s Apr 30 13:58:43.261: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 18.039649824s Apr 30 13:58:45.266: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=true. Elapsed: 20.044141567s Apr 30 13:58:47.272: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Running", Reason="", readiness=false. Elapsed: 22.05054925s Apr 30 13:58:49.277: INFO: Pod "pod-subpath-test-downwardapi-lhwp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055232682s STEP: Saw pod success Apr 30 13:58:49.277: INFO: Pod "pod-subpath-test-downwardapi-lhwp" satisfied condition "Succeeded or Failed" Apr 30 13:58:49.280: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-ctsmx pod pod-subpath-test-downwardapi-lhwp container test-container-subpath-downwardapi-lhwp: <nil> STEP: delete the pod Apr 30 13:58:49.305: INFO: Waiting for pod pod-subpath-test-downwardapi-lhwp to disappear Apr 30 13:58:49.308: INFO: Pod pod-subpath-test-downwardapi-lhwp no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-lhwp Apr 30 13:58:49.308: INFO: Deleting pod "pod-subpath-test-downwardapi-lhwp" in namespace "subpath-8416" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:58:49.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8416" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":26,"skipped":415,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:58:48.192: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward api env vars Apr 30 13:58:48.232: INFO: Waiting up to 5m0s for pod "downward-api-34587692-6266-4d65-85ba-d6caea0c0ffb" in namespace "downward-api-8131" to be "Succeeded or Failed" Apr 30 13:58:48.235: INFO: Pod "downward-api-34587692-6266-4d65-85ba-d6caea0c0ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.151885ms Apr 30 13:58:50.239: INFO: Pod "downward-api-34587692-6266-4d65-85ba-d6caea0c0ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007497236s Apr 30 13:58:52.243: INFO: Pod "downward-api-34587692-6266-4d65-85ba-d6caea0c0ffb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011503637s STEP: Saw pod success Apr 30 13:58:52.243: INFO: Pod "downward-api-34587692-6266-4d65-85ba-d6caea0c0ffb" satisfied condition "Succeeded or Failed" Apr 30 13:58:52.246: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod downward-api-34587692-6266-4d65-85ba-d6caea0c0ffb container dapi-container: <nil> STEP: delete the pod Apr 30 13:58:52.256: INFO: Waiting for pod downward-api-34587692-6266-4d65-85ba-d6caea0c0ffb to disappear Apr 30 13:58:52.259: INFO: Pod downward-api-34587692-6266-4d65-85ba-d6caea0c0ffb no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:58:52.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8131" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1109,"failed":0} ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:58:52.323: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename gc STEP: Waiting for a default 
service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics Apr 30 13:58:53.405: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-i77nai-control-plane-r7q6n is Running (Ready = true) Apr 30 13:58:53.554: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:58:53.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4530" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":54,"skipped":1153,"failed":0} ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:58:53.573: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 30 13:58:53.600: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-bacfaaca-64ff-4ae9-bb1d-ef087ef20c7b" in namespace "security-context-test-9027" to be "Succeeded or Failed" Apr 30 13:58:53.603: INFO: Pod "busybox-readonly-false-bacfaaca-64ff-4ae9-bb1d-ef087ef20c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300629ms Apr 30 13:58:55.606: INFO: Pod "busybox-readonly-false-bacfaaca-64ff-4ae9-bb1d-ef087ef20c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005989059s Apr 30 13:58:57.611: INFO: Pod "busybox-readonly-false-bacfaaca-64ff-4ae9-bb1d-ef087ef20c7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011024643s Apr 30 13:58:57.611: INFO: Pod "busybox-readonly-false-bacfaaca-64ff-4ae9-bb1d-ef087ef20c7b" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:58:57.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9027" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":1158,"failed":0} ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:58:57.659: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Apr 30 13:58:57.694: INFO: observed Deployment test-deployment in namespace 
deployment-2768 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 30 13:58:57.694: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 30 13:58:57.699: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 30 13:58:57.699: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 30 13:58:57.713: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 30 13:58:57.713: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 30 13:58:57.745: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 30 13:58:57.745: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 and labels map[test-deployment-static:true] Apr 30 13:58:58.483: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 and labels map[test-deployment-static:true] Apr 30 13:58:58.483: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 and labels map[test-deployment-static:true] Apr 30 13:58:58.524: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Apr 30 13:58:58.535: INFO: observed event type ADDED STEP: waiting for Replicas to scale Apr 30 13:58:58.537: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 Apr 30 13:58:58.537: INFO: observed Deployment test-deployment in namespace deployment-2768 with 
ReadyReplicas 0 Apr 30 13:58:58.537: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 Apr 30 13:58:58.537: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 Apr 30 13:58:58.537: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 Apr 30 13:58:58.537: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 Apr 30 13:58:58.537: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 Apr 30 13:58:58.537: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 0 Apr 30 13:58:58.538: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 Apr 30 13:58:58.538: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 Apr 30 13:58:58.538: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:58.538: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:58.538: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:58.538: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:58.544: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:58.544: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:58.560: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:58.560: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:58.566: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 Apr 30 13:58:58.566: INFO: observed Deployment test-deployment 
in namespace deployment-2768 with ReadyReplicas 1 Apr 30 13:58:59.553: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:59.553: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:58:59.570: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 STEP: listing Deployments Apr 30 13:58:59.573: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Apr 30 13:58:59.583: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Apr 30 13:58:59.590: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 30 13:58:59.591: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 30 13:58:59.605: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 30 13:58:59.619: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 30 13:58:59.633: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Apr 30 13:59:00.493: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Apr 30 13:59:00.545: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 3 and labels map[test-deployment:updated 
test-deployment-static:true] Apr 30 13:59:00.562: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Apr 30 13:59:00.590: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Apr 30 13:59:01.603: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Apr 30 13:59:01.632: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 Apr 30 13:59:01.632: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 Apr 30 13:59:01.633: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 Apr 30 13:59:01.633: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 Apr 30 13:59:01.633: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 1 Apr 30 13:59:01.633: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:59:01.633: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 3 Apr 30 13:59:01.633: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:59:01.633: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 2 Apr 30 13:59:01.633: INFO: observed Deployment test-deployment in namespace deployment-2768 with ReadyReplicas 3 STEP: deleting the Deployment Apr 30 13:59:01.645: INFO: observed event type MODIFIED Apr 30 13:59:01.646: INFO: observed event type MODIFIED Apr 30 13:59:01.646: INFO: observed event type MODIFIED Apr 
30 13:59:01.646: INFO: observed event type MODIFIED Apr 30 13:59:01.646: INFO: observed event type MODIFIED Apr 30 13:59:01.646: INFO: observed event type MODIFIED Apr 30 13:59:01.646: INFO: observed event type MODIFIED Apr 30 13:59:01.647: INFO: observed event type MODIFIED Apr 30 13:59:01.647: INFO: observed event type MODIFIED Apr 30 13:59:01.647: INFO: observed event type MODIFIED Apr 30 13:59:01.647: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 30 13:59:01.650: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:59:01.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2768" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":56,"skipped":1175,"failed":0} ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:58:49.348: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] 
StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-3473 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3473 STEP: Waiting until pod test-pod will start running in namespace statefulset-3473 STEP: Creating statefulset with conflicting port in namespace statefulset-3473 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3473 Apr 30 13:58:51.410: INFO: Observed stateful pod in namespace: statefulset-3473, name: ss-0, uid: 0bf88944-1bc1-4bd0-a29f-fbab97592ca5, status phase: Pending. Waiting for statefulset controller to delete. Apr 30 13:58:51.422: INFO: Observed stateful pod in namespace: statefulset-3473, name: ss-0, uid: 0bf88944-1bc1-4bd0-a29f-fbab97592ca5, status phase: Failed. Waiting for statefulset controller to delete. Apr 30 13:58:51.429: INFO: Observed stateful pod in namespace: statefulset-3473, name: ss-0, uid: 0bf88944-1bc1-4bd0-a29f-fbab97592ca5, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 30 13:58:51.431: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3473 STEP: Removing pod with conflicting port in namespace statefulset-3473 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3473 and will be in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Apr 30 13:58:53.445: INFO: Deleting all statefulset in ns statefulset-3473 Apr 30 13:58:53.447: INFO: Scaling statefulset ss to 0 Apr 30 13:59:03.462: INFO: Waiting for statefulset status.replicas updated to 0 Apr 30 13:59:03.465: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:59:03.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3473" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":27,"skipped":429,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:59:01.712: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 30 13:59:01.746: INFO: Waiting up to 5m0s for pod "pod-de47d589-5bc4-483e-b370-db9d03e1515b" in namespace "emptydir-9534" to be "Succeeded or Failed" Apr 30 13:59:01.749: INFO: Pod "pod-de47d589-5bc4-483e-b370-db9d03e1515b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.987468ms Apr 30 13:59:03.754: INFO: Pod "pod-de47d589-5bc4-483e-b370-db9d03e1515b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007829277s Apr 30 13:59:05.759: INFO: Pod "pod-de47d589-5bc4-483e-b370-db9d03e1515b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012267808s STEP: Saw pod success Apr 30 13:59:05.759: INFO: Pod "pod-de47d589-5bc4-483e-b370-db9d03e1515b" satisfied condition "Succeeded or Failed" Apr 30 13:59:05.761: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-de47d589-5bc4-483e-b370-db9d03e1515b container test-container: <nil> STEP: delete the pod Apr 30 13:59:05.938: INFO: Waiting for pod pod-de47d589-5bc4-483e-b370-db9d03e1515b to disappear Apr 30 13:59:05.941: INFO: Pod pod-de47d589-5bc4-483e-b370-db9d03e1515b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:59:05.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9534" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1203,"failed":0} ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 30 13:59:03.495: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 30 13:59:04.127: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 30 13:59:07.148: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 30 13:59:07.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1221" for this suite. STEP: Destroying namespace "webhook-1221-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":28,"skipped":437,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:54:08.377: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:08.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6615" for this suite.
• [SLOW TEST:300.047 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":25,"skipped":556,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:05.990: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 30 13:59:06.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5488e2c2-a216-4a18-9998-b05efe1cc1f3" in namespace "downward-api-8852" to be "Succeeded or Failed"
Apr 30 13:59:06.019: INFO: Pod "downwardapi-volume-5488e2c2-a216-4a18-9998-b05efe1cc1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463282ms
Apr 30 13:59:08.022: INFO: Pod "downwardapi-volume-5488e2c2-a216-4a18-9998-b05efe1cc1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006157804s
Apr 30 13:59:10.027: INFO: Pod "downwardapi-volume-5488e2c2-a216-4a18-9998-b05efe1cc1f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010536561s
STEP: Saw pod success
Apr 30 13:59:10.027: INFO: Pod "downwardapi-volume-5488e2c2-a216-4a18-9998-b05efe1cc1f3" satisfied condition "Succeeded or Failed"
Apr 30 13:59:10.029: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod downwardapi-volume-5488e2c2-a216-4a18-9998-b05efe1cc1f3 container client-container: <nil>
STEP: delete the pod
Apr 30 13:59:10.043: INFO: Waiting for pod downwardapi-volume-5488e2c2-a216-4a18-9998-b05efe1cc1f3 to disappear
Apr 30 13:59:10.045: INFO: Pod downwardapi-volume-5488e2c2-a216-4a18-9998-b05efe1cc1f3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:10.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8852" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1232,"failed":0}
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:08.454: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:59:08.525: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ee532935-9a93-426a-8515-79091bed5dfe" in namespace "security-context-test-7275" to be "Succeeded or Failed"
Apr 30 13:59:08.532: INFO: Pod "busybox-user-65534-ee532935-9a93-426a-8515-79091bed5dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.712055ms
Apr 30 13:59:10.535: INFO: Pod "busybox-user-65534-ee532935-9a93-426a-8515-79091bed5dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009741183s
Apr 30 13:59:12.538: INFO: Pod "busybox-user-65534-ee532935-9a93-426a-8515-79091bed5dfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013063338s
Apr 30 13:59:12.538: INFO: Pod "busybox-user-65534-ee532935-9a93-426a-8515-79091bed5dfe" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:12.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7275" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":580,"failed":0}
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:10.077: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:59:10.098: INFO: Creating pod...
Apr 30 13:59:10.109: INFO: Pod Quantity: 1 Status: Pending
Apr 30 13:59:11.114: INFO: Pod Quantity: 1 Status: Pending
Apr 30 13:59:12.113: INFO: Pod Quantity: 1 Status: Pending
Apr 30 13:59:13.113: INFO: Pod Status: Running
Apr 30 13:59:13.113: INFO: Creating service...
Apr 30 13:59:13.125: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/pods/agnhost/proxy/some/path/with/DELETE
Apr 30 13:59:13.129: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Apr 30 13:59:13.129: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/pods/agnhost/proxy/some/path/with/GET
Apr 30 13:59:13.132: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Apr 30 13:59:13.132: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/pods/agnhost/proxy/some/path/with/HEAD
Apr 30 13:59:13.135: INFO: http.Client request:HEAD | StatusCode:200
Apr 30 13:59:13.135: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/pods/agnhost/proxy/some/path/with/OPTIONS
Apr 30 13:59:13.137: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Apr 30 13:59:13.137: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/pods/agnhost/proxy/some/path/with/PATCH
Apr 30 13:59:13.139: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Apr 30 13:59:13.139: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/pods/agnhost/proxy/some/path/with/POST
Apr 30 13:59:13.142: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Apr 30 13:59:13.142: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/pods/agnhost/proxy/some/path/with/PUT
Apr 30 13:59:13.144: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
Apr 30 13:59:13.144: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/services/test-service/proxy/some/path/with/DELETE
Apr 30 13:59:13.147: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Apr 30 13:59:13.147: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/services/test-service/proxy/some/path/with/GET
Apr 30 13:59:13.151: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Apr 30 13:59:13.151: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/services/test-service/proxy/some/path/with/HEAD
Apr 30 13:59:13.153: INFO: http.Client request:HEAD | StatusCode:200
Apr 30 13:59:13.153: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/services/test-service/proxy/some/path/with/OPTIONS
Apr 30 13:59:13.156: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Apr 30 13:59:13.156: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/services/test-service/proxy/some/path/with/PATCH
Apr 30 13:59:13.159: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Apr 30 13:59:13.159: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/services/test-service/proxy/some/path/with/POST
Apr 30 13:59:13.162: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Apr 30 13:59:13.162: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-2178/services/test-service/proxy/some/path/with/PUT
Apr 30 13:59:13.165: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:13.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2178" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":59,"skipped":1248,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:07.469: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 13:59:08.302: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Apr 30 13:59:10.311: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 30, 13, 59, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 59, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 59, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 59, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 13:59:13.334: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:13.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9031" for this suite.
STEP: Destroying namespace "webhook-9031-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":29,"skipped":483,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:12.564: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 30 13:59:15.103: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3734 pod-service-account-e09075a1-04c2-49a6-9135-46e74743e2e6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 30 13:59:15.253: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3734 pod-service-account-e09075a1-04c2-49a6-9135-46e74743e2e6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 30 13:59:15.380: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3734 pod-service-account-e09075a1-04c2-49a6-9135-46e74743e2e6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:15.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3734" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":27,"skipped":591,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:15.545: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 13:59:16.185: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 13:59:19.204: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 30 13:59:21.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-3750 attach --namespace=webhook-3750 to-be-attached-pod -i -c=container1'
Apr 30 13:59:21.313: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:21.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3750" for this suite.
STEP: Destroying namespace "webhook-3750-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":28,"skipped":595,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:21.414: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 30 13:59:21.439: INFO: Waiting up to 5m0s for pod "pod-89c30dcc-fc05-4d40-aa87-528a7079b307" in namespace "emptydir-3091" to be "Succeeded or Failed"
Apr 30 13:59:21.442: INFO: Pod "pod-89c30dcc-fc05-4d40-aa87-528a7079b307": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711493ms
Apr 30 13:59:23.447: INFO: Pod "pod-89c30dcc-fc05-4d40-aa87-528a7079b307": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007324775s
Apr 30 13:59:25.450: INFO: Pod "pod-89c30dcc-fc05-4d40-aa87-528a7079b307": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01079536s
STEP: Saw pod success
Apr 30 13:59:25.450: INFO: Pod "pod-89c30dcc-fc05-4d40-aa87-528a7079b307" satisfied condition "Succeeded or Failed"
Apr 30 13:59:25.453: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-89c30dcc-fc05-4d40-aa87-528a7079b307 container test-container: <nil>
STEP: delete the pod
Apr 30 13:59:25.467: INFO: Waiting for pod pod-89c30dcc-fc05-4d40-aa87-528a7079b307 to disappear
Apr 30 13:59:25.469: INFO: Pod pod-89c30dcc-fc05-4d40-aa87-528a7079b307 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:25.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3091" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":618,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:25.490: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating Pod
STEP: Reading file content from the nginx-container
Apr 30 13:59:27.530: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7518 PodName:pod-sharedvolume-1d07c5ff-a263-4b5b-a816-3d6446424eb9 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:59:27.530: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:59:27.531: INFO: ExecWithOptions: Clientset creation
Apr 30 13:59:27.531: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/emptydir-7518/pods/pod-sharedvolume-1d07c5ff-a263-4b5b-a816-3d6446424eb9/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true %!s(MISSING))
Apr 30 13:59:27.641: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:27.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7518" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":30,"skipped":627,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:27.689: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 30 13:59:27.723: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84e072a2-b794-43dd-9aa7-ac963608c9c7" in namespace "downward-api-8557" to be "Succeeded or Failed"
Apr 30 13:59:27.726: INFO: Pod "downwardapi-volume-84e072a2-b794-43dd-9aa7-ac963608c9c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.448069ms
Apr 30 13:59:29.730: INFO: Pod "downwardapi-volume-84e072a2-b794-43dd-9aa7-ac963608c9c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007274191s
Apr 30 13:59:31.734: INFO: Pod "downwardapi-volume-84e072a2-b794-43dd-9aa7-ac963608c9c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01127698s
STEP: Saw pod success
Apr 30 13:59:31.734: INFO: Pod "downwardapi-volume-84e072a2-b794-43dd-9aa7-ac963608c9c7" satisfied condition "Succeeded or Failed"
Apr 30 13:59:31.738: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod downwardapi-volume-84e072a2-b794-43dd-9aa7-ac963608c9c7 container client-container: <nil>
STEP: delete the pod
Apr 30 13:59:31.750: INFO: Waiting for pod downwardapi-volume-84e072a2-b794-43dd-9aa7-ac963608c9c7 to disappear
Apr 30 13:59:31.752: INFO: Pod downwardapi-volume-84e072a2-b794-43dd-9aa7-ac963608c9c7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:31.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8557" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":654,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:31.783: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 30 13:59:31.800: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:59:34.746: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:43.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8035" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":32,"skipped":669,"failed":0}
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:13.186: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should succeed in writing subpaths in container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Apr 30 13:59:15.217: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-7642 PodName:var-expansion-28c1e2fc-dc29-455c-bbcd-ae02feabaa40 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:59:15.217: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:59:15.218: INFO: ExecWithOptions: Clientset creation
Apr 30 13:59:15.218: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/var-expansion-7642/pods/var-expansion-28c1e2fc-dc29-455c-bbcd-ae02feabaa40/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING))
STEP: test for file in mounted path
Apr 30 13:59:15.285: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-7642 PodName:var-expansion-28c1e2fc-dc29-455c-bbcd-ae02feabaa40 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 13:59:15.285: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 13:59:15.286: INFO: ExecWithOptions: Clientset creation
Apr 30 13:59:15.286: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/var-expansion-7642/pods/var-expansion-28c1e2fc-dc29-455c-bbcd-ae02feabaa40/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING))
STEP: updating the annotation value
Apr 30 13:59:15.872: INFO: Successfully updated pod "var-expansion-28c1e2fc-dc29-455c-bbcd-ae02feabaa40"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Apr 30 13:59:15.875: INFO: Deleting pod "var-expansion-28c1e2fc-dc29-455c-bbcd-ae02feabaa40" in namespace "var-expansion-7642"
Apr 30 13:59:15.879: INFO: Wait up to 5m0s for pod "var-expansion-28c1e2fc-dc29-455c-bbcd-ae02feabaa40" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:49.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7642" for this suite.
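The pod shape this test exercises is a volume mounted twice, once whole and once through a `subPathExpr` that expands from a pod annotation via the downward API. A minimal illustrative manifest is sketched below; the names and image are hypothetical, not taken from the test, and the snippet only writes and inspects the file rather than applying it to a cluster.

```shell
# Write an illustrative manifest demonstrating subPathExpr expansion from an
# annotation (hypothetical names; the real test builds its pod in Go code).
cat > /tmp/var-expansion-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
  annotations:
    mysubpath: mypath/foo
spec:
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    env:
    - name: POD_SUBPATH
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
    - name: workdir
      mountPath: /subpath_mount
      subPathExpr: $(POD_SUBPATH)
  volumes:
  - name: workdir
    emptyDir: {}
EOF
grep -c 'subPathExpr' /tmp/var-expansion-pod.yaml
```

With this shape, `touch /volume_mount/mypath/foo/test.log` becomes visible as `/subpath_mount/test.log`, which is exactly the pair of exec probes in the log above.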
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":60,"skipped":1259,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:43.879: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Apr 30 13:59:43.912: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:59:45.916: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Apr 30 13:59:45.927: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:59:47.931: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Apr 30 13:59:47.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 30 13:59:47.941: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 30 13:59:49.941: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 30 13:59:49.946: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:49.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-12" for this suite.
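The "pod with lifecycle hook" created above has the general shape sketched below: a container carrying a `preStop` exec hook that phones home to the `pod-handle-http-request` handler pod before termination. The manifest is illustrative, assembled from the pod names visible in the log; the container name, image, and exact hook command are assumptions, not the test's real definition.

```shell
# Illustrative preStop-hook pod (hypothetical container/image/command; the e2e
# test constructs its equivalent in Go).
cat > /tmp/prestop-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/agnhost:2.33
    lifecycle:
      preStop:
        exec:
          command:
          - sh
          - -c
          - curl http://pod-handle-http-request:8080/echo?msg=prestop
EOF
grep -c 'preStop' /tmp/prestop-pod.yaml
```

The "check prestop hook" step then asserts that the handler pod actually received the request, i.e. the kubelet ran the hook before killing the container.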
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":670,"failed":0}
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:49.947: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Apr 30 13:59:49.991: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:50.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4526" for this suite.
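The "requesting DeleteCollection of events" step is a single label-scoped collection delete rather than N individual deletes; with a live cluster a rough kubectl equivalent would be `kubectl delete events -l <label> -n <namespace>`. The snippet below only models the bookkeeping the test then verifies (created count equals deleted count), with stand-in event names.

```shell
# Stand-in event list (hypothetical names); the test labels its created events
# and deletes exactly that set via one DeleteCollection call.
events="test-event-1 test-event-2 test-event-3"
deleted=0
for e in $events; do
  deleted=$((deleted + 1))   # models the per-object removal the API performs
done
echo "deleted $deleted events"
```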
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":61,"skipped":1303,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:50.066: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:50.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6242" for this suite.
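The QoS rule behind "matching resource requests and limits" is: when every container's requests equal its limits (for both cpu and memory), the pod gets `qosClass: Guaranteed`; requests below limits yield `Burstable`; nothing set yields `BestEffort`. A deliberately simplified single-resource sketch of that classification:

```shell
# Simplified QoS classifier for one container and one resource; the real rule
# considers cpu and memory across all containers in the pod.
qos_class() {
  req="$1"; lim="$2"
  if [ -z "$req" ] && [ -z "$lim" ]; then echo BestEffort
  elif [ "$req" = "$lim" ]; then echo Guaranteed
  else echo Burstable
  fi
}

qos_class 100m 100m   # requests == limits
qos_class 100m 200m   # requests < limits
qos_class "" ""       # neither set
```

The test above creates the requests-equal-limits variant and asserts the API server reports `Guaranteed` in `status.qosClass`.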
•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":62,"skipped":1335,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:13.420: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-6210
STEP: creating service affinity-clusterip-transition in namespace services-6210
STEP: creating replication controller affinity-clusterip-transition in namespace services-6210
I0430 13:59:13.502422      17 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-6210, replica count: 3
I0430 13:59:16.553309      17 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 30 13:59:16.558: INFO: Creating new exec pod
Apr 30 13:59:19.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6210 exec execpod-affinity9b84b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Apr 30 13:59:19.718: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Apr 30 13:59:19.718: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 30 13:59:19.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6210 exec execpod-affinity9b84b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.150.143 80'
Apr 30 13:59:19.866: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.135.150.143 80\nConnection to 10.135.150.143 80 port [tcp/http] succeeded!\n"
Apr 30 13:59:19.867: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 30 13:59:19.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6210 exec execpod-affinity9b84b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.135.150.143:80/ ; done'
Apr 30 13:59:20.129: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n"
Apr 30 13:59:20.129: INFO: stdout: "\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-vkvhg"
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.129: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6210 exec execpod-affinity9b84b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.135.150.143:80/ ; done'
Apr 30 13:59:20.367: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n"
Apr 30 13:59:20.367: INFO: stdout: "\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-vkvhg\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-hrtpp\naffinity-clusterip-transition-hrtpp"
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-vkvhg
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:20.367: INFO: Received response from host: affinity-clusterip-transition-hrtpp
Apr 30 13:59:50.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6210 exec execpod-affinity9b84b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.135.150.143:80/ ; done'
Apr 30 13:59:50.654: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.135.150.143:80/\n"
Apr 30 13:59:50.654: INFO: stdout: "\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s\naffinity-clusterip-transition-4t52s"
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Received response from host: affinity-clusterip-transition-4t52s
Apr 30 13:59:50.654: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6210, will wait for the garbage collector to delete the pods
Apr 30 13:59:50.726: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.735265ms
Apr 30 13:59:50.826: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.802289ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:52.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6210" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":498,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:52.912: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:52.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7822" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":527,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:49.976: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-e010a8a9-41d6-4edc-8613-5bace699270b
STEP: Creating a pod to test consume secrets
Apr 30 13:59:50.008: INFO: Waiting up to 5m0s for pod "pod-secrets-5c114531-f3c3-41ca-9e71-432d64954fae" in namespace "secrets-1780" to be "Succeeded or Failed"
Apr 30 13:59:50.011: INFO: Pod "pod-secrets-5c114531-f3c3-41ca-9e71-432d64954fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.90863ms
Apr 30 13:59:52.015: INFO: Pod "pod-secrets-5c114531-f3c3-41ca-9e71-432d64954fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007489898s
Apr 30 13:59:54.020: INFO: Pod "pod-secrets-5c114531-f3c3-41ca-9e71-432d64954fae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011607185s
STEP: Saw pod success
Apr 30 13:59:54.020: INFO: Pod "pod-secrets-5c114531-f3c3-41ca-9e71-432d64954fae" satisfied condition "Succeeded or Failed"
Apr 30 13:59:54.022: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod pod-secrets-5c114531-f3c3-41ca-9e71-432d64954fae container secret-volume-test: <nil>
STEP: delete the pod
Apr 30 13:59:54.037: INFO: Waiting for pod pod-secrets-5c114531-f3c3-41ca-9e71-432d64954fae to disappear
Apr 30 13:59:54.039: INFO: Pod pod-secrets-5c114531-f3c3-41ca-9e71-432d64954fae no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:54.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1780" for this suite.
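The "pod to test consume secrets" follows a standard shape: the secret mounted as a volume and read back by a short-lived test container. The manifest below is an illustrative reconstruction using the secret name from the log; the image, args, and mount path are assumptions rather than the test's literal definition.

```shell
# Illustrative secret-as-volume consumer pod (image/args/paths are hypothetical;
# the secret name is the one created in the log above).
cat > /tmp/secret-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.33
    args: ["mounttest", "--file_content=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-e010a8a9-41d6-4edc-8613-5bace699270b
EOF
grep -c 'secretName' /tmp/secret-pod.yaml
```

The test then waits for the pod to reach "Succeeded or Failed" and inspects the container's logs, which is the polling sequence visible above.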
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":676,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:56:09.243: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating all guestbook components
Apr 30 13:56:09.272: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Apr 30 13:56:09.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 create -f -'
Apr 30 13:56:09.955: INFO: stderr: ""
Apr 30 13:56:09.955: INFO: stdout: "service/agnhost-replica created\n"
Apr 30 13:56:09.955: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Apr 30 13:56:09.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 create -f -'
Apr 30 13:56:10.214: INFO: stderr: ""
Apr 30 13:56:10.214: INFO: stdout: "service/agnhost-primary created\n"
Apr 30 13:56:10.215: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apr 30 13:56:10.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 create -f -'
Apr 30 13:56:10.398: INFO: stderr: ""
Apr 30 13:56:10.398: INFO: stdout: "service/frontend created\n"
Apr 30 13:56:10.399: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.33
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Apr 30 13:56:10.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 create -f -'
Apr 30 13:56:10.572: INFO: stderr: ""
Apr 30 13:56:10.572: INFO: stdout: "deployment.apps/frontend created\n"
Apr 30 13:56:10.572: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.33
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 30 13:56:10.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 create -f -'
Apr 30 13:56:10.801: INFO: stderr: ""
Apr 30 13:56:10.801: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Apr 30 13:56:10.801: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.33
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 30 13:56:10.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 create -f -'
Apr 30 13:56:10.988: INFO: stderr: ""
Apr 30 13:56:10.988: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Apr 30 13:56:10.988: INFO: Waiting for all frontend pods to be Running.
Apr 30 13:56:16.039: INFO: Waiting for frontend to serve content.
Apr 30 13:59:50.661: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s� � �v1��Status��� � �������Failure�ierror trying to reach service: read tcp 172.18.0.9:46294->192.168.3.95:80: read: connection reset by peer"�ServiceUnavailable0����"�
Apr 30 13:59:55.674: INFO: Trying to add a new entry to the guestbook.
Apr 30 13:59:55.685: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 30 13:59:55.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 delete --grace-period=0 --force -f -'
Apr 30 13:59:55.785: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 30 13:59:55.785: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Apr 30 13:59:55.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 delete --grace-period=0 --force -f -'
Apr 30 13:59:55.903: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 30 13:59:55.903: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Apr 30 13:59:55.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 delete --grace-period=0 --force -f -'
Apr 30 13:59:55.998: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 30 13:59:55.999: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 30 13:59:55.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 delete --grace-period=0 --force -f -'
Apr 30 13:59:56.070: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 30 13:59:56.070: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 30 13:59:56.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 delete --grace-period=0 --force -f -'
Apr 30 13:59:56.180: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 30 13:59:56.180: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Apr 30 13:59:56.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7608 delete --grace-period=0 --force -f -'
Apr 30 13:59:56.283: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 30 13:59:56.283: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:56.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7608" for this suite.
• [SLOW TEST:227.054 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":44,"skipped":1032,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:50.111: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Apr 30 13:59:50.140: INFO: The status of Pod labelsupdate474961d9-a1a6-4a0b-a484-702ecc8b77cf is Pending, waiting for it to be Running (with Ready = true)
Apr 30 13:59:52.145: INFO: The status of Pod labelsupdate474961d9-a1a6-4a0b-a484-702ecc8b77cf is Running (Ready = true)
Apr 30 13:59:52.665: INFO: Successfully updated pod "labelsupdate474961d9-a1a6-4a0b-a484-702ecc8b77cf"
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 13:59:56.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6415" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1342,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:56.377: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-2949/configmap-test-8657c8c4-2cf4-4d1f-85d8-cc7c60c5e3b3
STEP: Creating a pod to test consume configMaps
Apr 30 13:59:56.416: INFO: Waiting up to 5m0s for pod "pod-configmaps-77fd69f9-5956-4473-9610-64aa8d8bd515" in namespace "configmap-2949" to be "Succeeded or Failed"
Apr 30 13:59:56.422: INFO: Pod "pod-configmaps-77fd69f9-5956-4473-9610-64aa8d8bd515": Phase="Pending", Reason="", readiness=false. Elapsed: 5.900672ms
Apr 30 13:59:58.427: INFO: Pod "pod-configmaps-77fd69f9-5956-4473-9610-64aa8d8bd515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010810554s
Apr 30 14:00:00.431: INFO: Pod "pod-configmaps-77fd69f9-5956-4473-9610-64aa8d8bd515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014209882s
STEP: Saw pod success
Apr 30 14:00:00.431: INFO: Pod "pod-configmaps-77fd69f9-5956-4473-9610-64aa8d8bd515" satisfied condition "Succeeded or Failed"
Apr 30 14:00:00.433: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-ctsmx pod pod-configmaps-77fd69f9-5956-4473-9610-64aa8d8bd515 container env-test: <nil>
STEP: delete the pod
Apr 30 14:00:00.447: INFO: Waiting for pod pod-configmaps-77fd69f9-5956-4473-9610-64aa8d8bd515 to disappear
Apr 30 14:00:00.449: INFO: Pod pod-configmaps-77fd69f9-5956-4473-9610-64aa8d8bd515 no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:00.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2949" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":1071,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:52.961: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 30 13:59:53.283: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 30 13:59:55.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 30, 13, 59, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 59, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 30, 13, 59, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 30, 13, 59, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-bb9577b7b\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 13:59:58.307: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 13:59:58.311: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:01.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9282" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":32,"skipped":531,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 13:59:56.718: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 13:59:57.167: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 14:00:00.186: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 14:00:00.189: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:03.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6687" for this suite.
STEP: Destroying namespace "webhook-6687-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":64,"skipped":1357,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:01.496: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not conflict [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 14:00:01.555: INFO: The status of Pod pod-secrets-0014f8eb-5fa9-4890-a21c-3d4b19b0f638 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 14:00:03.559: INFO: The status of Pod pod-secrets-0014f8eb-5fa9-4890-a21c-3d4b19b0f638 is Running (Ready = true)
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:03.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9880" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":33,"skipped":547,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:00.512: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-2db62878-a4f8-4072-8487-209536a2ecf8
STEP: Creating a pod to test consume configMaps
Apr 30 14:00:00.539: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-96d5bee3-9419-44a5-b5b1-69c2442ab851" in namespace "projected-6359" to be "Succeeded or Failed"
Apr 30 14:00:00.542: INFO: Pod "pod-projected-configmaps-96d5bee3-9419-44a5-b5b1-69c2442ab851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109709ms
Apr 30 14:00:02.546: INFO: Pod "pod-projected-configmaps-96d5bee3-9419-44a5-b5b1-69c2442ab851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006680916s
Apr 30 14:00:04.551: INFO: Pod "pod-projected-configmaps-96d5bee3-9419-44a5-b5b1-69c2442ab851": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010967792s
STEP: Saw pod success
Apr 30 14:00:04.551: INFO: Pod "pod-projected-configmaps-96d5bee3-9419-44a5-b5b1-69c2442ab851" satisfied condition "Succeeded or Failed"
Apr 30 14:00:04.554: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-projected-configmaps-96d5bee3-9419-44a5-b5b1-69c2442ab851 container agnhost-container: <nil>
STEP: delete the pod
Apr 30 14:00:04.579: INFO: Waiting for pod pod-projected-configmaps-96d5bee3-9419-44a5-b5b1-69c2442ab851 to disappear
Apr 30 14:00:04.581: INFO: Pod pod-projected-configmaps-96d5bee3-9419-44a5-b5b1-69c2442ab851 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:04.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6359" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":1113,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:04.593: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support CronJob API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a cronjob
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 30 14:00:04.631: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 30 14:00:04.635: INFO: starting watch
STEP: patching
STEP: updating
Apr 30 14:00:04.649: INFO: waiting for watch events with expected annotations
Apr 30 14:00:04.649: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:04.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-3052" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":47,"skipped":1115,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:03.610: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 30 14:00:03.637: INFO: Waiting up to 5m0s for pod "pod-24b9c07b-5075-46af-9cc1-38af82d35d05" in namespace "emptydir-1609" to be "Succeeded or Failed"
Apr 30 14:00:03.639: INFO: Pod "pod-24b9c07b-5075-46af-9cc1-38af82d35d05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.423541ms
Apr 30 14:00:05.643: INFO: Pod "pod-24b9c07b-5075-46af-9cc1-38af82d35d05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006240962s
Apr 30 14:00:07.647: INFO: Pod "pod-24b9c07b-5075-46af-9cc1-38af82d35d05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009691215s
STEP: Saw pod success
Apr 30 14:00:07.647: INFO: Pod "pod-24b9c07b-5075-46af-9cc1-38af82d35d05" satisfied condition "Succeeded or Failed"
Apr 30 14:00:07.649: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod pod-24b9c07b-5075-46af-9cc1-38af82d35d05 container test-container: <nil>
STEP: delete the pod
Apr 30 14:00:07.665: INFO: Waiting for pod pod-24b9c07b-5075-46af-9cc1-38af82d35d05 to disappear
Apr 30 14:00:07.667: INFO: Pod pod-24b9c07b-5075-46af-9cc1-38af82d35d05 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:07.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1609" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":564,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:07.709: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:07.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9572" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":35,"skipped":587,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:07.785: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 14:00:07.809: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:08.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1263" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":36,"skipped":597,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:04.715: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 30 14:00:04.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e35d7acb-b47d-4fea-be18-0aa3b416bc6d" in namespace "downward-api-1426" to be "Succeeded or Failed"
Apr 30 14:00:04.756: INFO: Pod "downwardapi-volume-e35d7acb-b47d-4fea-be18-0aa3b416bc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489434ms
Apr 30 14:00:06.760: INFO: Pod "downwardapi-volume-e35d7acb-b47d-4fea-be18-0aa3b416bc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008561244s
Apr 30 14:00:08.764: INFO: Pod "downwardapi-volume-e35d7acb-b47d-4fea-be18-0aa3b416bc6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012694034s
STEP: Saw pod success
Apr 30 14:00:08.764: INFO: Pod "downwardapi-volume-e35d7acb-b47d-4fea-be18-0aa3b416bc6d" satisfied condition "Succeeded or Failed"
Apr 30 14:00:08.767: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-a2pwxc pod downwardapi-volume-e35d7acb-b47d-4fea-be18-0aa3b416bc6d container client-container: <nil>
STEP: delete the pod
Apr 30 14:00:08.784: INFO: Waiting for pod downwardapi-volume-e35d7acb-b47d-4fea-be18-0aa3b416bc6d to disappear
Apr 30 14:00:08.786: INFO: Pod downwardapi-volume-e35d7acb-b47d-4fea-be18-0aa3b416bc6d no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:08.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1426" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":1125,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:08.358: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 14:00:08.374: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Apr 30 14:00:08.383: INFO: The status of Pod pod-exec-websocket-c1b21911-1ccd-4886-a256-bbe54e18a317 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 14:00:10.387: INFO: The status of Pod pod-exec-websocket-c1b21911-1ccd-4886-a256-bbe54e18a317 is Running (Ready = true)
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:10.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5419" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":600,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:08.811: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating secret secrets-6162/secret-test-c47253c4-3056-4a27-8e5d-9bd47f520d4e
STEP: Creating a pod to test consume secrets
Apr 30 14:00:08.837: INFO: Waiting up to 5m0s for pod "pod-configmaps-13a07416-9d3d-47b8-9721-f56ff6fccfa9" in namespace "secrets-6162" to be "Succeeded or Failed"
Apr 30 14:00:08.840: INFO: Pod "pod-configmaps-13a07416-9d3d-47b8-9721-f56ff6fccfa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.876779ms
Apr 30 14:00:10.845: INFO: Pod "pod-configmaps-13a07416-9d3d-47b8-9721-f56ff6fccfa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007449899s
Apr 30 14:00:12.851: INFO: Pod "pod-configmaps-13a07416-9d3d-47b8-9721-f56ff6fccfa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013675839s
STEP: Saw pod success
Apr 30 14:00:12.851: INFO: Pod "pod-configmaps-13a07416-9d3d-47b8-9721-f56ff6fccfa9" satisfied condition "Succeeded or Failed"
Apr 30 14:00:12.855: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-a2pwxc pod pod-configmaps-13a07416-9d3d-47b8-9721-f56ff6fccfa9 container env-test: <nil>
STEP: delete the pod
Apr 30 14:00:12.872: INFO: Waiting for pod pod-configmaps-13a07416-9d3d-47b8-9721-f56ff6fccfa9 to disappear
Apr 30 14:00:12.874: INFO: Pod pod-configmaps-13a07416-9d3d-47b8-9721-f56ff6fccfa9 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:12.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6162" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1137,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:10.514: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test substitution in container's command
Apr 30 14:00:10.540: INFO: Waiting up to 5m0s for pod "var-expansion-fed680b8-e366-4323-985a-93664e92876a" in namespace "var-expansion-6756" to be "Succeeded or Failed"
Apr 30 14:00:10.542: INFO: Pod "var-expansion-fed680b8-e366-4323-985a-93664e92876a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475523ms
Apr 30 14:00:12.546: INFO: Pod "var-expansion-fed680b8-e366-4323-985a-93664e92876a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006284802s
Apr 30 14:00:14.551: INFO: Pod "var-expansion-fed680b8-e366-4323-985a-93664e92876a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011008873s
STEP: Saw pod success
Apr 30 14:00:14.551: INFO: Pod "var-expansion-fed680b8-e366-4323-985a-93664e92876a" satisfied condition "Succeeded or Failed"
Apr 30 14:00:14.553: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-6q75m pod var-expansion-fed680b8-e366-4323-985a-93664e92876a container dapi-container: <nil>
STEP: delete the pod
Apr 30 14:00:14.566: INFO: Waiting for pod var-expansion-fed680b8-e366-4323-985a-93664e92876a to disappear
Apr 30 14:00:14.569: INFO: Pod var-expansion-fed680b8-e366-4323-985a-93664e92876a no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:14.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6756" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":638,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:12.918: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: submitting the pod to kubernetes
Apr 30 14:00:12.944: INFO: The status of Pod pod-update-69431813-9668-4e01-877d-bd8420b070ed is Pending, waiting for it to be Running (with Ready = true)
Apr 30 14:00:14.948: INFO: The status of Pod pod-update-69431813-9668-4e01-877d-bd8420b070ed is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 30 14:00:15.462: INFO: Successfully updated pod "pod-update-69431813-9668-4e01-877d-bd8420b070ed"
STEP: verifying the updated pod is in kubernetes
Apr 30 14:00:15.468: INFO: Pod update OK
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:15.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-336" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1161,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:14.579: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 30 14:00:15.009: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 30 14:00:18.030: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 14:00:18.033: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5079-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:21.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9" for this suite.
STEP: Destroying namespace "webhook-9-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":39,"skipped":639,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:15.488: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Creating a NodePort Service
STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota
STEP: Ensuring resource quota status captures service creation
STEP: Deleting Services
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:26.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1985" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":51,"skipped":1162,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:21.302: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:27.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6345" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":40,"skipped":655,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:26.666: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-6051e9fc-4a03-48c1-b614-bedbc361dd8c
STEP: Creating a pod to test consume secrets
Apr 30 14:00:26.698: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-078b0ff4-d40e-4edc-8b7a-1c2db9178108" in namespace "projected-709" to be "Succeeded or Failed"
Apr 30 14:00:26.700: INFO: Pod "pod-projected-secrets-078b0ff4-d40e-4edc-8b7a-1c2db9178108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337793ms
Apr 30 14:00:28.704: INFO: Pod "pod-projected-secrets-078b0ff4-d40e-4edc-8b7a-1c2db9178108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006418385s
Apr 30 14:00:30.709: INFO: Pod "pod-projected-secrets-078b0ff4-d40e-4edc-8b7a-1c2db9178108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011304009s
STEP: Saw pod success
Apr 30 14:00:30.709: INFO: Pod "pod-projected-secrets-078b0ff4-d40e-4edc-8b7a-1c2db9178108" satisfied condition "Succeeded or Failed"
Apr 30 14:00:30.712: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-a2pwxc pod pod-projected-secrets-078b0ff4-d40e-4edc-8b7a-1c2db9178108 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 30 14:00:30.726: INFO: Waiting for pod pod-projected-secrets-078b0ff4-d40e-4edc-8b7a-1c2db9178108 to disappear
Apr 30 14:00:30.728: INFO: Pod pod-projected-secrets-078b0ff4-d40e-4edc-8b7a-1c2db9178108 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:30.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-709" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":1188,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:27.399: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 14:00:27.428: INFO: The status of Pod server-envvars-baba57d1-6ad2-4242-bbd4-68d5238aff81 is Pending, waiting for it to be Running (with Ready = true)
Apr 30 14:00:29.432: INFO: The status of Pod server-envvars-baba57d1-6ad2-4242-bbd4-68d5238aff81 is Running (Ready = true)
Apr 30 14:00:29.450: INFO: Waiting up to 5m0s for pod "client-envvars-5b40ac73-39bb-4df8-95d4-2111f3f6b92e" in namespace "pods-4903" to be "Succeeded or Failed"
Apr 30 14:00:29.453: INFO: Pod "client-envvars-5b40ac73-39bb-4df8-95d4-2111f3f6b92e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.907468ms
Apr 30 14:00:31.457: INFO: Pod "client-envvars-5b40ac73-39bb-4df8-95d4-2111f3f6b92e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006075919s
Apr 30 14:00:33.461: INFO: Pod "client-envvars-5b40ac73-39bb-4df8-95d4-2111f3f6b92e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010066735s
STEP: Saw pod success
Apr 30 14:00:33.461: INFO: Pod "client-envvars-5b40ac73-39bb-4df8-95d4-2111f3f6b92e" satisfied condition "Succeeded or Failed"
Apr 30 14:00:33.463: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-o9uwcm pod client-envvars-5b40ac73-39bb-4df8-95d4-2111f3f6b92e container env3cont: <nil>
STEP: delete the pod
Apr 30 14:00:33.476: INFO: Waiting for pod client-envvars-5b40ac73-39bb-4df8-95d4-2111f3f6b92e to disappear
Apr 30 14:00:33.478: INFO: Pod client-envvars-5b40ac73-39bb-4df8-95d4-2111f3f6b92e no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:33.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4903" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":661,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:30.742: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 30 14:00:30.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-528a2334-f780-4860-9d32-82f3740dd682" in namespace "projected-8404" to be "Succeeded or Failed"
Apr 30 14:00:30.770: INFO: Pod "downwardapi-volume-528a2334-f780-4860-9d32-82f3740dd682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521806ms
Apr 30 14:00:32.775: INFO: Pod "downwardapi-volume-528a2334-f780-4860-9d32-82f3740dd682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007102381s
Apr 30 14:00:34.779: INFO: Pod "downwardapi-volume-528a2334-f780-4860-9d32-82f3740dd682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011007467s
STEP: Saw pod success
Apr 30 14:00:34.779: INFO: Pod "downwardapi-volume-528a2334-f780-4860-9d32-82f3740dd682" satisfied condition "Succeeded or Failed"
Apr 30 14:00:34.781: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-worker-a2pwxc pod downwardapi-volume-528a2334-f780-4860-9d32-82f3740dd682 container client-container: <nil>
STEP: delete the pod
Apr 30 14:00:34.793: INFO: Waiting for pod downwardapi-volume-528a2334-f780-4860-9d32-82f3740dd682 to disappear
Apr 30 14:00:34.795: INFO: Pod downwardapi-volume-528a2334-f780-4860-9d32-82f3740dd682 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:34.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8404" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1192,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:33.521: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 14:00:33.542: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 30 14:00:35.568: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:36.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4108" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":42,"skipped":688,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:34.828: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
Apr 30 14:00:34.865: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 30 14:00:36.871: INFO: The status of Pod test-pod is Running (Ready = true)
STEP: Creating hostNetwork=true pod
Apr 30 14:00:36.882: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 30 14:00:38.886: INFO: The status of Pod test-host-network-pod is Running (Ready = true)
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 30 14:00:38.889: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:38.889: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:38.890: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:38.890: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:38.967: INFO: Exec stderr: ""
Apr 30 14:00:38.967: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:38.967: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:38.968: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:38.968: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:39.055: INFO: Exec stderr: ""
Apr 30 14:00:39.055: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:39.055: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:39.056: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:39.056: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:39.144: INFO: Exec stderr: ""
Apr 30 14:00:39.144: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:39.144: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:39.144: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:39.144: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:39.244: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Apr 30 14:00:39.244: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:39.244: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:39.245: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:39.245: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:39.297: INFO: Exec stderr: ""
Apr 30 14:00:39.297: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:39.297: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:39.298: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:39.298: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:39.368: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Apr 30 14:00:39.368: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:39.368: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:39.369: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:39.369: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:39.458: INFO: Exec stderr: ""
Apr 30 14:00:39.458: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:39.458: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:39.458: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:39.458: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:39.537: INFO: Exec stderr: ""
Apr 30 14:00:39.538: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:39.538: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:39.538: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:39.538: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:39.613: INFO: Exec stderr: ""
Apr 30 14:00:39.613: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4837 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 14:00:39.613: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 30 14:00:39.613: INFO: ExecWithOptions: Clientset creation
Apr 30 14:00:39.614: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-4837/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Apr 30 14:00:39.706: INFO: Exec stderr: ""
[AfterEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:39.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4837" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":1211,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:36.592: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-e8a4f6bf-cb9f-44f6-ad25-e7b88a335cff
STEP: Creating a pod to test consume secrets
Apr 30 14:00:36.619: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-24c60003-17bd-4801-836a-cdf7eec31218" in namespace "projected-7963" to be "Succeeded or Failed"
Apr 30 14:00:36.621: INFO: Pod "pod-projected-secrets-24c60003-17bd-4801-836a-cdf7eec31218": Phase="Pending", Reason="", readiness=false. Elapsed: 2.690575ms
Apr 30 14:00:38.624: INFO: Pod "pod-projected-secrets-24c60003-17bd-4801-836a-cdf7eec31218": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005716336s
Apr 30 14:00:40.628: INFO: Pod "pod-projected-secrets-24c60003-17bd-4801-836a-cdf7eec31218": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009860465s
STEP: Saw pod success
Apr 30 14:00:40.629: INFO: Pod "pod-projected-secrets-24c60003-17bd-4801-836a-cdf7eec31218" satisfied condition "Succeeded or Failed"
Apr 30 14:00:40.631: INFO: Trying to get logs from node k8s-upgrade-and-conformance-i77nai-md-0-65465d95c4-ctsmx pod pod-projected-secrets-24c60003-17bd-4801-836a-cdf7eec31218 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 30 14:00:40.646: INFO: Waiting for pod pod-projected-secrets-24c60003-17bd-4801-836a-cdf7eec31218 to disappear
Apr 30 14:00:40.648: INFO: Pod pod-projected-secrets-24c60003-17bd-4801-836a-cdf7eec31218 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:40.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7963" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":693,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:40.671: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should support --unix-socket=/path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Starting the proxy
Apr 30 14:00:40.691: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9976 proxy --unix-socket=/tmp/kubectl-proxy-unix3883055616/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:40.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9976" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":44,"skipped":705,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 30 14:00:39.771: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 30 14:00:39.789: INFO: Creating deployment "test-recreate-deployment"
Apr 30 14:00:39.795: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 30 14:00:39.801: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 30 14:00:41.809: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 30 14:00:41.811: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 30 14:00:41.822: INFO: Updating deployment test-recreate-deployment
Apr 30 14:00:41.822: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 30 14:00:41.877: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7501 044296bf-bea3-4538-990f-5980c0d4101b 19181 2 2022-04-30 14:00:39 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-30 14:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 14:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] 
nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0046330f8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-30 14:00:41 +0000 UTC,LastTransitionTime:2022-04-30 14:00:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5b99bd5487" is progressing.,LastUpdateTime:2022-04-30 14:00:41 +0000 UTC,LastTransitionTime:2022-04-30 14:00:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 30 14:00:41.880: INFO: New ReplicaSet "test-recreate-deployment-5b99bd5487" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5b99bd5487 deployment-7501 07a9ca1d-ca5d-420b-b765-ef321f94ffd9 19180 1 2022-04-30 14:00:41 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 
044296bf-bea3-4538-990f-5980c0d4101b 0xc003aa0bb7 0xc003aa0bb8}] [] [{kube-controller-manager Update apps/v1 2022-04-30 14:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"044296bf-bea3-4538-990f-5980c0d4101b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 14:00:41 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5b99bd5487,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003aa0c68 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 30 14:00:41.880: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 30 14:00:41.880: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-7d659f7dc9 deployment-7501 18156b00-e328-417d-8508-c9e1ede194b2 19169 2 2022-04-30 14:00:39 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:7d659f7dc9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 044296bf-bea3-4538-990f-5980c0d4101b 0xc003aa0cd7 0xc003aa0cd8}] [] [{kube-controller-manager Update apps/v1 2022-04-30 14:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"044296bf-bea3-4538-990f-5980c0d4101b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-30 14:00:41 +0000 UTC FieldsV1 
{"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7d659f7dc9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:7d659f7dc9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003aa0d88 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 30 14:00:41.883: INFO: Pod "test-recreate-deployment-5b99bd5487-dgsjh" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5b99bd5487-dgsjh test-recreate-deployment-5b99bd5487- deployment-7501 2c38ce16-b56b-4660-8a3f-32119b887f8f 19182 0 2022-04-30 14:00:41 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5b99bd5487 07a9ca1d-ca5d-420b-b765-ef321f94ffd9 0xc003aa1227 0xc003aa1228}] [] [{kube-controller-manager Update v1 2022-04-30 14:00:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07a9ca1d-ca5d-420b-b765-ef321f94ffd9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-30 14:00:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k4ks7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4ks7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-i77nai-worker-o9uwcm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 14:00:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 14:00:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 14:00:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-30 14:00:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2022-04-30 14:00:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 30 14:00:41.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7501" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":55,"skipped":1251,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/f